Blogs

Interesting Reads

We are a cloud native service company with an eye on the latest trends in the cloud industry.


Blogs

Why and How to Modernize your Legacy On-prem Databases with AWS Cloud
As organizations generate, process, and store ever-larger quantities of data, they need efficient databases to ensure data security, confidentiality, and accessibility. In the past, these databases were part of their on-premises infrastructure. But these databases come with many constraints that today's companies want freedom from. Cloud-based databases on AWS enable them to modernize their legacy databases and break free from those constraints. Here's how.

Why You Should Break Free from Legacy Databases

Legacy on-prem databases were designed for static installations with a smaller user base, where organizations had to do capacity planning and provision the database accordingly. But this caused practical problems: if the traffic was lower than expected, resources remained idle and the company ended up overpaying; if the traffic was higher than expected, performance suffered. Today's businesses – including yours – operate in a highly dynamic environment that these older databases cannot keep up with.

For one, the amount of data being produced by organizations is fast outgrowing their on-premises capability to store it all. Moreover, if you create or use data-hungry and/or delay-sensitive applications, on-site databases will be too costly to procure and too complex to configure, administer, and maintain.

Another problem is that expensive annual licenses for on-prem databases can lead to spiraling costs and limit your flexibility to scale the database up or down. Then there are the concerns about a business disruption resulting in costly downtime and potentially catastrophic data losses. Such disruptions are rare in cloud environments.

Finally, modern development practices and an agile team can help you react quickly to changing requirements or market conditions, but a legacy database cannot support these requirements. And if you use a microservice architecture, you will need change-data-capture mechanisms and logical replication of data changes in the database. This is only possible with cloud-based databases.

Why You Should Embrace AWS Cloud Databases

So now you know how a legacy database can cause more problems than it solves. Managing these databases can be incredibly difficult, time-consuming, and expensive – especially at scale with high availability and reliability.

AWS offers high-performance, cost-effective, and fully managed databases, so you no longer have to worry about the complexities of database management and data warehouse administration. These database solutions are purpose-built for organizations.

You get fast, interactive query performance that is 3-5x that of on-prem alternatives. Moreover, these solutions can support 20 million+ requests per second. In short, you get a cloud-based database that offers high performance at any scale.

Reliability is another key differentiator of AWS databases. You can confidently run any kind of business-critical workload and build scalable and secure enterprise applications. Also, your data will be safeguarded by highly reliable AWS infrastructure in secure data centers.

Finally, AWS database solutions like Amazon Aurora and DynamoDB provide all the security and availability of commercial on-prem databases – at 1/10th the cost. Amazon itself has cut its database operating costs by 50% by migrating a massive 50 petabytes of data from Oracle to AWS, so there's no reason why your organization cannot achieve a similar outcome. By moving to AWS databases, Amazon has also reduced its database administration and hardware management costs and reduced the latency of its most critical services by 40% despite handling 2x the transaction volume.

Amazon Aurora and DynamoDB: Built for the Cloud and for Modern Organizations

Amazon Aurora is a cloud-native relational database compatible with both MySQL and PostgreSQL. It is designed to offer high performance and availability, with up to a 99.99% uptime SLA at a global scale – which bulky and unwieldy legacy databases cannot provide. This makes Amazon Aurora ideal for performance-intensive applications, Internet-scale applications, and critical workloads.

Aurora has purpose-built storage spanning multiple Availability Zones (AZs) for maximum availability and durability. Its log-based design further improves performance and reduces IOPS for both speed and cost-effectiveness. Further, your database storage scales automatically with usage, so there's no need to overprovision for unexpected spikes.

DynamoDB is also a fully managed database built for the cloud's dynamic environment. This NoSQL database service offers single-digit-millisecond performance and near-unlimited throughput. Moreover, it can scale up or down on demand to effectively handle traffic peaks for many kinds of high-performance applications.

DynamoDB also secures your data, both at rest and in transit. Like Aurora, it offers an SLA of up to 99.99% availability. It also offers:
Continuous backups with point-in-time recovery (PITR) to protect data from accidental writes or deletes
Automated multi-region replication with global tables
In-memory caching to deliver fast read performance for tables at scale
Data export tools to extract actionable insights from data
Easy capture of table changes with DynamoDB Streams

Further, both Amazon Aurora and DynamoDB use pay-as-you-use pricing, meaning you only pay for the capacity your application uses. There are no annual licenses and you pay for read and write units, so you can convert hefty capital expenditures (CAPEX) into more predictable operational expenditures (OPEX) for greater control over costs and revenues.

Other AWS Services to Free You from the Burden of On-prem Databases

Several AWS services are available to help you streamline the move from on-prem databases to AWS. For example, Amazon Simple Storage Service (Amazon S3) is ideal for the long-term storage of relational and non-relational data. It is built to retrieve and protect any amount of data from anywhere and for many use cases, including data lakes, cloud-native applications, and mobile apps. Amazon S3 also offers high scalability, data availability, and performance to match your business requirements.

Another useful service to help you break the shackles of on-prem databases is AWS Database Migration Service (AWS DMS). With AWS DMS, you can migrate any commercial or open-source database to AWS quickly and securely. You can also continuously replicate data with low latency and consolidate databases into a petabyte-scale data warehouse.

Two other useful services to consider are Amazon Relational Database Service (Amazon RDS) and AWS Schema Conversion Tool (AWS SCT). Amazon RDS simplifies the setup, operation, and scaling of databases in the AWS cloud. It's ideal for building web or mobile applications without having to self-manage your database. AWS SCT converts the source database schema and most of the database code objects for more predictable heterogeneous database migrations.

Both AWS DMS and AWS SCT are suitable for "self-service" migration. But if you prefer migration assistance, you can sign up with AWS Professional Services and choose an AWS-certified migration partner like Axcess.

Break Free from Legacy Databases with AWS and Axcess

In the cloud era, traditional relational databases are outgunned and outmaneuvered. To take full advantage of the cloud's flexibility, speed, and cost-effectiveness, consider transitioning from these legacy systems to cloud-native databases on AWS.

Axcess can help you get started with the transition. As we have seen, cloud-based databases like Amazon Aurora and DynamoDB offer a host of advantages to prepare you for a cloud-driven future. Contact us to get started with your migration journey.
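The pay-per-request pricing and point-in-time recovery mentioned above can be tried out with a few lines of boto3. The sketch below is illustrative only: the table name, key schema, and region are assumptions, not part of any particular migration.

```python
import boto3

# Illustrative example: table name, key schema, and region are assumptions.
dynamodb = boto3.client("dynamodb", region_name="ap-south-1")

# On-demand (pay-per-request) billing: no capacity planning, pay per read/write.
dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Wait until the table is active, then enable point-in-time recovery (PITR).
dynamodb.get_waiter("table_exists").wait(TableName="orders")
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```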
7 points to consider while building Infra as Code (IaC) on AWS
There are many questions in the air regarding architecting Infrastructure as Code (IaC) and IaC pipelines. Adopting cloud and automation tools eases the complexity of infrastructure changes. However, improved consistency and reliability don't come out of the box with the software: it takes architects to think through how they will use the tools, and to design the systems, processes, and discipline to use them effectively.

A common problem with IaC design is that, as we introduce more and more components into the stack, our IaC unintentionally becomes a monolith. In the past, I have seen companies that put their best resources and enormous effort into coding IaC templates/scripts for 50+ modules, only to realize that change-set management and the risk of touching the same massive code base for every minor change are unmanageable. A wrongly designed IaC certainly makes life messier.

Here I want to share my experience and a few important lessons learned while designing and optimizing many IaC solutions. Though I have tried to keep the points generic, I will be mentioning a few AWS tools and services for practical examples:

1. Follow a Layered IaC Design Approach
Categorize stack components into Hardly Changed, Infrequently Changed, and Frequently Changed layers, and decide an appropriate deployment strategy and tool for every layer.

2. Keep Loose Cross-References
Instead of using tightly coupled references, i.e., the output of the layer-1 stack directly referred to in the layer-2 stack, it's better to push layer-1 output to global storage like Vault or AWS Parameter Store (see the sketch after this list). We are then not bound to use the same tool for layers 1 and 2. Also, we can change params manually in case of any unavoidable situation or a Severity-0 issue.

3. Use the Public Cloud Provider's Native Tools Wherever Possible
"When it comes to IaC, at times, being 'cloud agnostic' is an overvalued concept."
There is no simple way to write a cloud-agnostic deployment template; better to use the cloud provider's native tool. E.g., AWS's CDK gives three types of constructs off the shelf: Level 1, Level 2, and Level 3. L1 resources are the same as CloudFormation resources, L2 are curated ones that encapsulate L1 resources, and L3 creates an entire architecture for a particular use case. Using L2 and L3 resources eliminates the difficulty of managing complex cross-referencing by providing simplified curated resources.

4. Use Nested Templates with a Modular Approach
"Managing the entire IaC in a single file is an inefficient way."
Being modular helps with easy updates, where any part can be changed without the risk of touching others.
Other development considerations:
Environment-specific inputs should be saved outside the template and passed as configuration files.
Use a unique environment suffix with every resource name. It's mainly for proper tagging and also helps to avoid any region-specific unique-naming restrictions.
Secrets should strictly be kept outside IaC templates and the repo. We can use Vault or AWS Secrets Manager-like services to manage secrets.

5. Validate and Test Before Execution
"If infra is version controlled and managed as code, testing and validating the code can't be overlooked."
Tools like cfn-lint and AWS TaskCat help with template validation.

6. Use Deploy-Only Pipelines
Maintain well-controlled deploy-only pipelines for production to avoid arbitrary infrastructure changes.

7. Run Regular Jobs to Catch Drift (If Any)
Yes, there will be drift, as it's impossible to handle every P0 production infra issue with IaC changes. We must be flexible for quick fixes but immediately add a story to enhance the IaC: implement the IaC changes, test in lower environments, revert the manual change, and roll out the same via IaC in the following deployment window.
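As a concrete illustration of point 2 (loose cross-references), here is a minimal boto3 sketch that publishes a layer-1 output to AWS Systems Manager Parameter Store and reads it back from layer 2; the parameter name, region, and VPC ID are illustrative assumptions:

```python
import boto3

ssm = boto3.client("ssm", region_name="ap-south-1")

# Layer 1 (e.g., the network stack) publishes its outputs after deployment.
# The parameter name and value below are illustrative.
ssm.put_parameter(
    Name="/infra/layer1/vpc-id",
    Value="vpc-0abc1234def567890",
    Type="String",
    Overwrite=True,
)

# Layer 2 (e.g., the application stack) reads the value at deploy time,
# regardless of which IaC tool produced it, and it can be overridden
# manually in a Severity-0 situation.
vpc_id = ssm.get_parameter(Name="/infra/layer1/vpc-id")["Parameter"]["Value"]
print(vpc_id)
```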
How to Understand and Evaluate S3 Costs over the Data Lifecycle
AWS Simple Storage Service—S3 in short—is a cheap, secure, durable, highly available object storage service for storing an unlimited amount of data. S3 is simple to use and scales infinitely in terms of data storage. This is true at least in theory, and in practice nobody seems to have hit its limits, nor are they likely to in the foreseeable future. In S3, data is stored across multiple devices in a minimum of three Availability Zones in a highly redundant manner. S3 is highly flexible and can store any type of data; it is a key-based object store. S3 is built on a global infrastructure, and using it is as simple as making standard HTTPS calls.

With its ever-growing popularity and ease of use, it provides many features and is still growing and evolving to enable a large number of data storage and management use cases across the industry, big or small. But there is something that has everyone worried, namely, S3 usage costs. Costs are not a matter of worry on their own, given that S3 is indeed quite an inexpensive storage option in the cloud. What bothers people is the visibility into the various factors which affect cost calculations. These factors are too detailed to be taken lightly. In this article, we will take a scenario, neither a very happy one nor a very complex one, as an example for calculating usage costs, which hopefully will make us aware of the fine details.

AWS S3 Pricing

S3 supports an incredibly wide range of data storage and management scenarios, and a large number of its technical features and usage criteria have attendant costs. Given all these features, it becomes quite challenging to visualize and evaluate how costs add up. Typically, the cost of S3 is calculated based on data storage, S3 API usage (that is, requests and data retrievals), network data transfer, and the other S3 data management and operations features, i.e., management and replication. There are different data storage classes in S3, and storage pricing differs between these classes; pricing also varies for the various API calls for data usage like uploads, listing, downloads, or retrievals. S3 pricing is listed at https://aws.amazon.com/s3/pricing/. We will refer to this page for all calculations.

Data Scenario and Assumptions

We have our S3 bucket in the Mumbai region. Suppose we upload 10240 GB (10 TB) of data per day, every day, from 1st March 2020 until 31st March 2020, with a total of 10 million objects corresponding to each day's 10 TB. This data is uploaded to the S3 bucket from application servers using S3 Object PUT requests, into the Standard storage class. We will assume that each object is larger than the S3 Intelligent-Tiering minimum object size criterion, i.e., 128 KB. Also, we will assume that objects in the S3 Intelligent Tier would be in the Frequent Access tier for the first 30 days and in the Infrequent Access tier for the next 30 days. The next assumption is that new uploads on each day occur at 00:00 and that the new data uses the full 24 hours of the day of the upload. In addition, we assume that this is a single AWS account billed individually, that it is not part of any AWS Organization with any other member accounts, and that the free tier coverage is over. We will use the S3 pricing page for all the price rates and units for calculations pertaining to the Mumbai region.

The S3 Lifecycle Transition—moving data to reduce costs

We can create Lifecycle Transitions to move data and ensure lower storage cost. Here, we will create a Lifecycle configuration using the following steps:
Keep data in S3 Standard for 30 days after uploading to S3 Standard.
After 30 days of creation, move the data to the S3 Intelligent Tier.
After 90 days of creation, move the data to the Glacier Deep Archive tier.
After 120 days of creation, delete the data.
Now, without further theoretical talk, let's begin the journey of cost calculations.

The S3 Standard Cost for March

Cost of Uploads
We calculate the price of uploads, i.e., PUT API calls, as shown below:
The PUT price per 1000 requests in S3 Standard = $0.005
(A) So, the cost of total PUTs = 10,000,000 * $0.005 / 1000 * 31 = $1550

Storage Cost
Now, we will calculate the price for the total data stored in S3 Standard for a month—from 1st March to 31st March. We calculate data in terms of GB-Month because data is moving to other tiers, and each object adds to monthly pricing only for the amount of time it is stored in S3 Standard during the month. Accordingly, the price for data storage for this one-month period is as follows:
Total number of days of storage = 31 days
Price of data storage in S3 Standard for the first 50 TB per month = $0.025 per GB
Price of data storage in S3 Standard for the next 450 TB per month = $0.024 per GB
Since the data changes every day, we will calculate the effective storage for March in terms of GB-Month. This signifies that the changing data size, on a cumulative basis, came to that many GB per month for the month of March. Notably, any new data uploaded to S3 Standard will remain there for 30 days. So, the calculation of cost for March would be as follows:
Total GB-Month = (Data uploaded on 1st March, there till 30th March) + (Data uploaded on 2nd March, there till 31st March) + (Data uploaded on 3rd March, for its GB-Month till 31st March) + … + (Data uploaded on 31st March)
Hence, total GB-Month for March = (10240 GB * 30 days / 31 days per month) + (10240 GB * (31-1) days / 31 days per month) + (10240 GB * (31-2) days / 31 days per month) + … + (10240 GB * (31-30) days / 31 days per month) = (10240 GB / 31 days per month) * (30 + (31-1) + (31-2) + … + (31-30)) days
This works out to (10240/31) * ((31*31) - (1+1+2+…+30)) GB-Month, further simplified as (10240 * (961 - 466) / 31) GB-Month = (10240 * 495 / 31) GB-Month
The final figure amounts to 163509.677 GB-Month, or 159.677 TB-Month
(B) The cost of storage for the month of March is 50 TB-Month at $0.025 per GB + 109.677 TB-Month at $0.024 per GB = 50 * 1024 * $0.025 + 109.677 * 1024 * $0.024, amounting to $3975.42

Transition to Intelligent Tier

After 30 days from creation, data would be moved to the S3 Intelligent Tier and remain there for the next 60 days. For the first 30 days, it would be in the Frequent Access tier; then it would move to the Infrequent Access tier for the remaining days. For simplicity, we assume that there is no access while data is stored in S3 IT. If some data is accessed frequently beyond the first 30 days in S3 IT, such data would remain in the Frequent Access tier; but we would then need to dig deeper to find out which data subsets were accessed frequently and at what times, and this is a difficult task.
However, we can make estimations using the bucket's CloudWatch storage metrics graph.
Objects uploaded to S3 on 1st March would move to S3 IT on 31st March and remain there till 29th May.
Objects uploaded to S3 on 2nd March would move to S3 IT on 1st April and remain there till 30th May.
Objects uploaded to S3 on 3rd March would remain in S3 Standard on 1st April; then they move to S3 IT on 2nd April and remain there till 31st May.
Objects uploaded to S3 on 4th March would remain in S3 Standard from 1st to 2nd April; these objects then move to S3 IT on 3rd April and remain there till 1st June.
Similarly, data uploaded to S3 on other days would move to S3 IT after 30 days. Hence, the date of this transition would fall somewhere in April. So, for the days in April before this date, this data would be in S3 Standard. From this date onwards, the data would be in S3 IT and would remain there for a total of 60 days. Finally, data uploaded to S3 on 31st March would be in S3 Standard from 1st April to 29th April, and then would move to S3 IT on 30th April; it will remain there till 28th June.

The S3 Intelligent Tier Cost for March

Objects uploaded to S3 on 1st March would move to S3 IT on 31st March and remain there till 29th May.

Transition Cost
The Lifecycle Transition price to S3 IT is $0.01 per 1000 Lifecycle Transition requests (objects)
(C) Thus, the cost of transition to S3 IT on 31st March is 10,000,000 * $0.01 / 1000, which works out to $100

Storage Cost
The price for storage in the Frequent Access tier, first 50 TB / month, is $0.025 per GB
(D) Thus, the cost of data storage for 31st March is 10240 GB * 1 day / 31 days per month * $0.025 per GB, which equates to (10240 * 0.025) / 31, totaling $8.258

Monitoring and Automation Cost
For Monitoring and Automation, all storage / month = $0.0025 per 1,000 objects
(E) Monitoring and Automation cost for March = 10,000,000 * $0.0025 / 1000 = $25

The S3 Standard Tier Cost for April

Storage Cost
The total GB-Month in April is 10 TB of data on 1st April for the 3rd March upload + 10 TB of data on 1st and 2nd April for the 4th March upload + … + 10 TB of data from 1st to 29th April for the 31st March upload.
So, the actual calculation is as follows: (10TB * 1 day / 30 days per month) + (10TB * 2 days / 30 days per month) + … + (10TB * 29 days / 30 days per month)
This equates to 10 * (1+2+…+29) / 30 TB-Month = 10 * 435 / 30 TB-Month, totaling 145 TB-Month
(F) The cost of storage for the Standard tier in April = ((50 * 1024) GB * $0.025 per GB) + ((95 * 1024) GB * $0.024 per GB), which is $3614.72

S3 Intelligent Tier Cost for April

The objects uploaded on 1st March transitioned to the S3 Intelligent Tier on 31st March; these objects will be in the S3 Intelligent Tier (S3 IT) in April from 1st April to 29th April in the Frequent Access tier, then in the Infrequent Access tier on 30th April. Objects uploaded from 2nd March to 31st March will get transitioned to the S3 Intelligent Tier (S3 IT) in April, from 1st April till 30th April, and remain there accordingly for 60 days.

Transition Cost
The Lifecycle Transition price to S3 IT is $0.01 per 1000 Lifecycle Transition requests (objects)
Total objects transitioned to S3 IT in April are 10,000,000 * 30, that is, 300,000,000
(G) So, the cost for Lifecycle Transition in April is 300,000,000 * $0.01 / 1000, which amounts to $3000

Storage Cost
The total GB-Month of data in S3 IT for the month of April is ( (10TB * 29 days / 30 days per month) + (10TB * 30 days / 30 days per month) + (10TB * (30-1) days / 30 days per month) + (10TB * (30-2) days / 30 days per month) + … + (10TB * (30-29) days / 30 days per month) ) (Frequent Access) + (10TB * 1 day / 30 days per month) (Infrequent Access)
This translates to 10TB * (29 + 30 + (30-1) + (30-2) + … + (30-29)) days / 30 days per month + (10TB * 1 day / 30 days per month);
that is, 10 * (29 + (30*30) - (1+2+…+29)) / 30 TB-Month = 164.67 TB-Month (Frequent Access) + 1/3 TB-Month (Infrequent Access)
(H) Thus, the cost for S3 IT storage = (50 * 1024 * $0.025) + (114.67 * 1024 * $0.024) + (1/3 * 1024 * $0.019), which works out to $4104.533

Monitoring and Automation Cost
For Monitoring and Automation, all storage / month is $0.0025 per 1,000 objects
(I) So, the cost of S3 Intelligent Tier management is ($0.0025 / 1000) * ( ( (10 million * 29 days / 30 days per month) + (10 million * 30 days / 30 days per month) + (10 million * (30-1) days / 30 days per month) + (10 million * (30-2) days / 30 days per month) + (10 million * (30-3) days / 30 days per month) + … + (10 million * (30-29) days / 30 days per month) ) (Frequent Access) + (10 million * 1 day / 30 days per month) (Infrequent Access) )
Hence, the actual calculation is ($0.0025 / 1000) * (10 million * (29 + 30 + (30-1) + (30-2) + (30-3) + … + (30-29) + 1) days / 30 days per month),
which translates to ($0.0025 / 1000) * (10 million * (30 + 30 + (30-1) + (30-2) + (30-3) + … + (30-29)) days / 30 days per month);
that is, ($0.0025 / 1000) * (10 million * 495 days / 30 days per month) = ($0.0025 / 1000) * 165 million object-months, amounting to $412.50

S3 Intelligent Tier Cost for May

Objects uploaded on 1st March will get transitioned to the S3 Intelligent Tier (S3 IT) Infrequent Access tier on 30th April; they will remain there accordingly for 30 days, i.e., on 30th April and from 1st May until 29th May.
Objects uploaded from 2nd March to 31st March will get transitioned to the S3 Intelligent Tier (S3 IT) in April—from 1st April till 30th April. They will be in the Frequent Access tier for 30 days. Then these objects will be automatically moved to the S3 Intelligent Tier (S3 IT) Infrequent Access tier for another 30 days.
Objects transitioned to S3 IT on 1st April will be in the Infrequent Access tier from 1st May to 30th May.
Objects transitioned to S3 IT on 2nd April will be in the Frequent Access tier on 1st May. Then, they will be in the Infrequent Access tier from 2nd May to 31st May.
Objects transitioned to S3 IT on 3rd April will be in the Frequent Access tier on 1st and 2nd May. Then, they will be in the Infrequent Access tier from 3rd May to 31st May and 1st June.
Similarly, the transition will occur for data uploaded on other dates.
Objects transitioned to S3 IT on 30th April will be in the Frequent Access tier from 1st May up to 29th May. Then, they will be in the Infrequent Access tier from 30th May to 31st May and from 1st June to 28th June.

Transition Cost
Since there is no transition from S3 Standard to the S3 Intelligent Tier in May, and the S3 Intelligent Tier's sub-tiers (i.e., Frequent/Infrequent Access tiers) don't count for transition, there is zero cost of Lifecycle transition this month.

Storage Cost
For the Frequent Access tier,
Total GB-Month of data in S3 IT for the month of May = (10TB * 1 day / 31 days per month) + (10TB * 2 days / 31 days per month) + (10TB * 3 days / 31 days per month) + … + (10TB * 29 days / 31 days per month) (Frequent Access);
this translates to 10TB * (1+2+3+…+29) days / 31 days per month;
that is, 10 * (1+2+3+…+29) / 31 TB-Month, resulting in 140.322 TB-Month
(J) So, the cost for S3 IT storage (Frequent Access) = (50 * 1024 * $0.025) + (90.322 * 1024 * $0.024), which is $3499.753472

For the Infrequent Access tier,
The total GB-Month of data in S3 IT for the month of May is (10TB * 29 days / 31 days per month) + (10TB * 30 days / 31 days per month) + (10TB * (31-1) days / 31 days per month) + (10TB * (31-2) days / 31 days per month) + … + (10TB * (31-29) days / 31 days per month) (Infrequent Access);
this would be 10TB * (29 + 30) days / 31 days per month + 10TB * (31*29 - (1+2+3+…+29)) days / 31 days per month;
which equates to 10 * (29 + 30 + 31*29 - (1+2+3+…+29)) / 31 TB-Month, i.e., 168.7096 TB-Month
(K) The cost for S3 IT storage (Infrequent Access) is then 168.7096 * 1024 * $0.019, totaling $3282.41398

Monitoring and Automation Cost
Monitoring and Automation, all storage / month, is $0.0025 per 1,000 objects
(L) The cost of S3 Intelligent Tier management is ($0.0025 / 1000) * (Frequent Access tier: (10 million * 1 day / 31 days per month) + (10 million * 2 days / 31 days per month) + (10 million * 3 days / 31 days per month) + … + (10 million * 29 days / 31 days per month) + Infrequent Access tier: (10 million * 29 days / 31 days per month) + (10 million * 30 days / 31 days per month) + (10 million * (31-1) days / 31 days per month) + (10 million * (31-2) days / 31 days per month) + … + (10 million * (31-29) days / 31 days per month));
this would translate to ($0.0025 / 1000) * (10 million * (1+2+3+…+29) days / 31 days per month + 10 million * (29 + 30 + (31-1) + (31-2) + … + (31-29)) days / 31 days per month);
that is, ($0.0025 / 1000) * ((10 * ((1+2+3+…+29) + (29 + 30 + (29 * 31) - (1+2+3+…+29))) / 31) million object-months);
further, ($0.0025 / 1000) * ((10 * (29 + 30 + (29 * 31)) / 31) million object-months);
thus, ($0.0025 / 1000) * (309.032258 million object-months), amounting to $772.58

S3 Glacier Deep Archive Tier Cost for May

Objects uploaded to S3 will finally get transitioned to the S3 Glacier Deep Archive tier after 90 days of creation and remain there accordingly for the remaining period of 30 days before being completely deleted from S3.
Objects uploaded on March 1st will get transitioned to Deep Archive on May 30th and remain there on 30th and 31st May; subsequently, they will remain in Deep Archive from 1st June for the remaining 30 days of the month. Similarly, objects uploaded on March 2nd will get transitioned to Deep Archive on May 31st and remain there on 31st May; then, from 1st June, they will remain in Deep Archive for 30 days.

Transition Cost
The price of Lifecycle Transition requests per 1000 requests is $0.07
(M) Thus, the cost of transition to Deep Archive in May is 2 * 10,000,000 * $0.07 / 1000, that is, $1400

Storage Cost
The storage price for objects in Deep Archive is affected by 3 factors:
The total data size stored
The total S3 Standard index size @ 8KB per object
The total S3 Glacier Deep Archive index size @ 32KB per object
The calculations are as follows:
The total object count transitioned to Deep Archive in May is 20,000,000
So, the total data size = 2 * 10TB
The total TB-Month of data size in May is (10TB * 2 days / 31 days per month) + (10TB * 1 day / 31 days per month) + (32KB * 10,000,000 * 2 days / 31 days per month) + (8KB * 10,000,000 * 1 day / 31 days per month) (S3 Standard);
that is, (990.967741935 GB-Month + 9.84438 GB-Month) (Deep Archive) + (2.461095 GB-Month) (S3 Standard);
so, 1000.81212194 GB-Month (Deep Archive) + 2.461095 GB-Month (S3 Standard)
The Deep Archive storage price = all storage / month is $0.002 per GB
The S3 Standard price = first 50 TB / month is $0.025 per GB
(N) Thus, the cost of storage in Deep Archive in May = 1000.81212194 * $0.002 + 2.461095 * $0.025, totaling $2.06315

S3 Intelligent Tier Cost for June

Objects transitioned to S3 IT on 2nd April will be in the Frequent Access tier up to 1st May, then in the Infrequent Access tier from 2nd May to 31st May.
Objects transitioned to S3 IT on 3rd April will be in the Frequent Access tier up to 2nd May, then in the Infrequent Access tier from 3rd May to 31st May and 1st June.
Similarly, objects transitioned to S3 IT on 4th April will be in the Frequent Access tier on 1st, 2nd, and 3rd May, then in the Infrequent Access tier from 4th May to 31st May and then on 1st and 2nd June.
These considerations apply similarly to objects transitioned later in April.
Objects transitioned to S3 IT on 30th April will be in the Frequent Access tier from 1st May up to 29th May, then in the Infrequent Access tier from 30th May to 31st May and, subsequently, from 1st June to 28th June.

Storage Cost
The total TB-Month in June can be calculated as (10TB * 1 day / 30 days per month) + (10TB * 2 days / 30 days per month) + (10TB * 3 days / 30 days per month) + … + (10TB * 28 days / 30 days per month);
which is 10 * (1+2+3+…+28) / 30 TB-Month, totaling 135.33333 TB-Month
(O) Then, the cost for S3 IT storage (Infrequent Access) is (10 * (1+2+3+…+28) / 30 * 1024 * $0.019), totaling $2633.04533

S3 Glacier Deep Archive Tier Cost for the Remaining Period

Remember, objects will be transitioned to S3 Glacier Deep Archive after 90 days from creation and deleted after 120 days of creation.
Price of Lifecycle Transition requests per 1000 requests is $0.07
Price of Lifecycle Transition requests for deletion is $0
Deep Archive storage price = all storage / month is $0.002 per GB
The S3 Standard price = first 50 TB / month is $0.025 per GB; next 450 TB / month is $0.024
The minimum storage duration in Deep Archive is 180 days. So, any object in Deep Archive that is deleted before 180 days from the day of transition to Deep Archive will be charged as if it were stored for 180 days after transitioning there.
Here is the information from AWS on S3 pricing:
Objects that are archived to S3 Glacier and S3 Glacier Deep Archive have a minimum of 90 days and 180 days of storage, respectively. Objects deleted before 90 days and 180 days incur a pro-rated charge equal to the storage charge for the remaining days. Objects that are deleted, overwritten, or transitioned to a different storage class before the minimum storage duration will incur the normal storage usage charge plus a pro-rated request charge for the remainder of the minimum storage duration.
This applies to the current scenario; thus, we need to calculate the cost for 180 days in Deep Archive, instead of 30 days.

Calculations
Objects transitioned to Deep Archive on 30th May will remain in Deep Archive in June from the 1st up to the 28th.
Objects transitioned to Deep Archive on 31st May will remain in Deep Archive in June from the 1st up to the 29th.
Objects transitioned to the S3 IT Infrequent Access tier on 2nd May will be transitioned to Deep Archive on 1st June. Then, they will remain there up to 30th June.
Objects transitioned to the S3 IT Infrequent Access tier on 3rd May will be transitioned to Deep Archive on 2nd June. Then, they will remain there up to 30th June and on 1st July.
This logic applies similarly to later objects.
Objects transitioned to the S3 IT Infrequent Access tier on 30th May will be transitioned to Deep Archive after 28th June. Then, they will remain there on 29th June and 30th June and from 1st July to 28th July.

Transition Cost
The price of Lifecycle Transition requests per 1000 requests is $0.07
(P) So, the cost of transition to Deep Archive in June is 29 * 10,000,000 * $0.07 / 1000, which works out to $20300.
Again, the minimum Deep Archive storage period is 180 days.
Objects uploaded to S3 on 1st March transitioned to Deep Archive on 30th May. The month-wise days they should have been in Deep Archive, if not deleted early, would be as follows: 30th May and 31st May; the full months of June, July, August, September, and October; then the days from 1st November until 25th November.
Objects uploaded to S3 on 31st March transitioned to Deep Archive on 29th June. The month-wise days they should have been in Deep Archive, if not deleted early, would be: 29th and 30th June; the full months of July, August, September, October, and November; then the days from 1st December up to 25th December.
All data transitioned in between will fill up the middle months and days, in chronological order, between these two extremities.

Storage Cost
As stated earlier, there are 3 components for Deep Archive storage:
The total data size stored
A total S3 Standard index size of 8KB per object (for early deletion, I will be adding this component too; if anybody finds it incorrect, please provide your comment and I would be happy to correct it)
A total S3 Glacier Deep Archive index size of 32KB per object (for early deletion, I will be adding this component too; if anybody finds it incorrect, please provide your comment and I would be happy to correct it)
In the case of early deletion, as referred to above from the pricing page, the storage cost would be equal to storage for 180 days from the date of transition to Deep Archive. There is also a pro-rated request charge for early deletion cases, but I am not able to find the pricing unit for that, so I will be skipping such request charges. (I will try to figure it out and update later, or if anybody has any information, please provide it in a comment below; I would be happy to update and would appreciate your inputs.)

Deep Archive Storage Costs in June
The total TB-Month for data objects is calculated as (10TB * 30 days / 30 days per month) + (10TB * (30-1) days / 30 days per month) + (10TB * (30-2) days / 30 days per month) + … + (10TB * (30-27) days / 30 days per month) + (10TB * (30-28) days / 30 days per month);
this equates to 10TB * ((30-0) + (30-1) + (30-2) + … + (30-27) + (30-28)) days / 30 days per month;
that is, 10TB * (30*29 - (1+2+…+28)) days / 30 days per month = 10 * ((30*29) - (14*29)) / 30 TB-Month;
which amounts to 154.666666667 TB-Month
The total GB-Month for Deep Archive index objects is (32KB * 10,000,000 * 30 days / 30 days per month) + (32KB * 10,000,000 * (30-1) days / 30 days per month) + (32KB * 10,000,000 * (30-2) days / 30 days per month) + … + (32KB * 10,000,000 * (30-27) days / 30 days per month) + (32KB * 10,000,000 * (30-28) days / 30 days per month);
so, 32KB * 10,000,000 * ((30*29) - (14*29)) / 30 KB-Month, which amounts to 4720.05208333 GB-Month
The total GB-Month for S3 index objects = (8KB * 10,000,000 * 30 days / 30 days per month) + (8KB * 10,000,000 * (30-1) days / 30 days per month) + (8KB * 10,000,000 * (30-2) days / 30 days per month) + … + (8KB * 10,000,000 * (30-27) days / 30 days per month) + (8KB * 10,000,000 * (30-28) days / 30 days per month);
so, 8KB * 10,000,000 * ((30*29) - (14*29)) / 30 KB-Month, totaling 1180.01302083 GB-Month
(Q) Thus, the total cost of storage in June is ((154.666666667 * 1024) + 4720.05208333) * $0.002 + (1180.01302083 * $0.025); this amounts to $355.697763

Deep Archive Storage Costs in July
The total TB-Month for data objects is 31 * (10TB * 31 days / 31 days per month), amounting to 310 TB-Month
The total GB-Month for Deep Archive index objects is 31 * (32KB * 10,000,000 * 31 days / 31 days per month), that is, 9460.44921875 GB-Month
The total GB-Month for S3 index objects is 31 * (8KB * 10,000,000 * 31 days / 31 days per month), which is 2365.11230469 GB-Month
(R) So, the total cost of storage in July = (((310 * 1024) + 9460.44921875) * $0.002) + (2365.11230469 * $0.025), totaling $712.928706

Deep Archive Storage Costs in August
The total TB-Month for data objects is 31 * (10TB * 31 days / 31 days per month), that is, 310 TB-Month
The total GB-Month for Deep Archive index objects is 31 * (32KB * 10,000,000 * 31 days / 31 days per month), which is 9460.44921875 GB-Month
The total GB-Month for S3 index objects is 31 * (8KB * 10,000,000 * 31 days / 31 days per month), amounting to 2365.11230469 GB-Month
(S) So, the total cost of storage in August is (((310 * 1024) + 9460.44921875) * $0.002) + (2365.11230469 * $0.025); this amounts to $712.928706

Deep Archive Storage Costs in September
The total TB-Month for data objects is 31 * (10TB * 30 days / 30 days per month), that is, 310 TB-Month
The total GB-Month for Deep Archive index objects is 31 * (32KB * 10,000,000 * 30 days / 30 days per month), which is 9460.44921875 GB-Month
The total GB-Month for S3 index objects is 31 * (8KB * 10,000,000 * 30 days / 30 days per month), which equals 2365.11230469 GB-Month
(T) So, the total cost of storage in September is (((310 * 1024) + 9460.44921875) * $0.002) + (2365.11230469 * $0.025), totaling $712.928706

Deep Archive Storage Costs in October
The total TB-Month for data objects is 31 * (10TB * 31 days / 31 days per month), that is, 310 TB-Month
The total GB-Month for Deep Archive index objects is 31 * (32KB * 10,000,000 * 31 days / 31 days per month), amounting to 9460.44921875 GB-Month
The total GB-Month for S3 index objects is 31 * (8KB * 10,000,000 * 31 days / 31 days per month), which is 2365.11230469 GB-Month
(U) So, the total cost of storage in October is (((310 * 1024) + 9460.44921875) * $0.002) + (2365.11230469 * $0.025); the total for October is $712.928706

Deep Archive Storage Costs in November
The total TB-Month for data objects = 31 * (10TB * 25 days / 30 days per month) + 30 * (10TB * 1 day / 30 days per month) + 29 * (10TB * 1 day / 30 days per month) + 28 * (10TB * 1 day / 30 days per month) + 27 * (10TB * 1 day / 30 days per month) + 26 * (10TB * 1 day / 30 days per month);
that is, 10 * ((31*25) + 30 + 29 + 28 + 27 + 26) / 30 TB-Month, which equals 305 TB-Month
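The staggered-upload GB-Month arithmetic used throughout these calculations can also be reproduced in a few lines of code. Here is a minimal Python sketch, assuming the same scenario as above (10,240 GB uploaded daily through a 31-day March, 30 days in S3 Standard before transition, and Mumbai Standard prices); it recomputes the figures in calculation (B):

```python
# Recompute the March S3 Standard GB-Month and cost from calculation (B).
# Assumptions: 10,240 GB uploaded daily, each day's data stays in Standard
# for 30 days before transitioning, 31-day billing month, Mumbai prices
# ($0.025/GB for the first 50 TB, $0.024/GB for the next 450 TB).

DAILY_GB = 10_240
DAYS_IN_MONTH = 31
KEEP_DAYS = 30


def standard_gb_month() -> float:
    """Sum each upload's GB-days in Standard during March, then convert to GB-Month."""
    total_gb_days = 0
    for upload_day in range(1, DAYS_IN_MONTH + 1):
        days_left_in_month = DAYS_IN_MONTH - upload_day + 1
        total_gb_days += DAILY_GB * min(KEEP_DAYS, days_left_in_month)
    return total_gb_days / DAYS_IN_MONTH


def standard_cost(gb_month: float) -> float:
    """Apply the tiered Standard storage prices."""
    tier1 = min(gb_month, 50 * 1024)
    tier2 = max(gb_month - 50 * 1024, 0)
    return tier1 * 0.025 + tier2 * 0.024


gbm = standard_gb_month()
print(f"{gbm:.3f} GB-Month")          # ~163509.677 GB-Month (159.677 TB-Month)
print(f"${standard_cost(gbm):.2f}")   # ~$3975.4, matching (B) up to rounding
```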
How Can CIOs Mitigate Cloud Outage Risk?
Cloud outages are real; they do happen. The best way to mitigate them is to plan for them. This includes design, solution, implementation, and routine DR drills.

1. What can cause a cloud outage?
All major cloud service providers (CSPs) build redundancy into their cloud offerings and promise 99.9% uptime. However, enterprises must still be prepared for cloud outages. Many factors can cause a cloud outage, including network issues, power outages, natural disasters, and DDoS or supply chain attacks. Software bugs, misconfiguration errors, and service interruptions during scheduled or unscheduled maintenance can also cause outages.

2. What sort of damage can a cloud outage cause?
Businesses require constant access to their cloud workloads, applications, and resources to maintain operational continuity. An outage can result in downtime that leads to missed sales or aborted transactions. The business might suffer huge losses, especially if the outage resulted from a cyberattack or impacted service delivery for an extended period. Disruptions can also damage its reputation and increase customer churn.

3. How frequent are cloud outages?
Most major CSPs promise 99.9% uptime. Nonetheless, cloud outages do happen, and organizations should guard against them. There has been a slight uptick in the number of major outages in the past three years. For nearly 40% of organizations, human error has caused a major outage. Further, 85% of these incidents resulted from users not following proper procedures or from flaws in their processes and procedures.

4. What can CIOs do to guard against cloud outages?
A redundant multi-cloud environment that spreads workloads across multiple locations can reduce an organization's vulnerability to cloud outages. Organizations must also adopt robust security tools, regularly test systems for cloud failures, and modernize the business continuity infrastructure. A business continuity and disaster recovery plan can also boost preparation and help bring things back to normal in case of a catastrophic event.

5. What can CIOs do to recover from cloud outages?
To recover from an outage, organizations must take regular backups of all workloads. CIOs must carefully choose the backup and recovery solution, because it is essential to recover quickly from a cloud outage. The solution should help restore services from backups, deliver fast RPO and RTO, and reduce the costs of downtime.

6. What types of organizations are most vulnerable to a cloud outage?
Companies in many industries rely on the cloud to develop customer-facing applications and software, to maintain data backups, and for disaster recovery. They also use the cloud to manage data, enable remote work, prevent fraud, and leverage predictive analytics. The number of use cases for cloud computing is constantly increasing, so an outage can affect any organization that uses the cloud.

7. Why is a DR drill so important?
After preparing the disaster recovery (DR) plan, companies should test the plan to ensure that it will work in real-world situations. Successful organizations perform DR drill tests simulating live scenarios. These drills often reveal security and backup issues that must be fixed to avoid the damaging effects of a future outage. Testers must test all apps, perform server updates, and modernize the infrastructure as appropriate.
CVE Scan using AWS Inspector
About AWS Inspector

AWS Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, it produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports, which are available via the Amazon Inspector console or API. Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered as pre-defined rules packages mapped to common security best practices and vulnerability definitions. These rules are regularly updated by AWS security researchers.

What are the different Assessment Types?

The Network Assessment evaluates the EC2 instance protections for Internet-visible ports, i.e., for ingress from points outside the VPC. This type of assessment cannot examine the EC2 instance itself unless the optional agent is installed. The Host Assessment is significantly more thorough, as it evaluates the EC2 instances for vulnerable software (CVE), systems hardening (CIS), and security best practices. The agent can be installed using AWS Systems Manager for enterprise scale (formerly EC2 Systems Manager or SSM), or manually on each instance.

The Amazon Inspector Nomenclature

Amazon Inspector agent: Inspector agents are installed on the EC2 instances. These agents collect the data associated with installed software and send it to the AWS Inspector service. Note: this will not find vulnerable code if your application is infected.
Assessment target: A set of EC2 instances that you want to assess for vulnerability. Targets can be identified by unique tags.
Rules and rules packages: Checks are performed on the IT resources based on certain rules. In the context of Amazon Inspector, a rule is a security check that Amazon Inspector performs during the assessment run. The available rules packages are:
Network Reachability
Common Vulnerabilities and Exposures
Center for Internet Security (CIS) Benchmarks
Security best practices for Amazon Inspector
Findings: Findings are the potential security issues discovered by Inspector. Findings are displayed on the Amazon Inspector console or fetched through the API.

How to install the Inspector Agent

Agents collect the data (behavioral and configuration) and pass it on to Amazon Inspector for further analysis. Installing the agent on Linux is a very simple process. As of this writing, agent installation using the Systems Manager Run Command is not supported for the Debian operating system. To use this option, make sure that your EC2 instance has the SSM Agent installed and has an IAM role that allows Run Command. The SSM Agent is installed by default on Amazon EC2 Windows instances and Amazon Linux instances. Amazon EC2 Systems Manager requires an IAM role for EC2 instances that process commands and a separate role for users executing commands.
Download the agent from the following paths:
Linux:
wget https://inspector-agent.amazonaws.com/linux/latest/install
or
curl -O https://inspector-agent.amazonaws.com/linux/latest/install
Windows:
https://inspector-agent.amazonaws.com/windows/installer/latest/AWSAgentInstall.exe
To install, run:
$ sudo bash install

Configure Amazon Inspector

Step 1: Click on Get started.
Step 2: You can leave the default options checked and click on any of the run options below as per your requirement. For this walkthrough, we have opted for Advanced setup.
Step 3: Here, Amazon Inspector has an option to run on all the instances that are present in your account and region. If you want to run it for a standalone instance or a specific set of instances, use EC2 tags to segregate them. We can also install Inspector agents using SSM from this window for all instances. As a prerequisite, make sure that the SSM agents are already installed and that EC2 has the appropriate IAM rights for the same.
Step 4: Define the rules packages. By default, certain packages are selected. If needed, you can remove rules by clicking the 'X' mark. This window also provides an option to set a schedule for further recurring scans.
Step 5: Once verified, click on Create to start the first assessment run. Once done, you will get a success message.
Step 6: You can verify the assessment run by clicking Assessment templates in the navigation options.
Step 7: After about an hour, you should be able to see the findings under the Findings option. You can also segregate findings based on severity.

Dashboard View
You can also get a consolidated view from the Dashboard option.

Pricing
Amazon Inspector is a "pay for what you use" service, like the vast majority of those provided by AWS. Amazon Inspector is free for up to 250 agents for the first 90 days. After 90 days, the pricing changes; please refer to the AWS pricing page for details.
Possible scenario: Suppose you have 10 Amazon EC2 instances in your assessment target with the Inspector Agent installed on each instance. In this example, you would be billed for 10 host agent-assessments and 10 network reachability instance-assessments. The Amazon Inspector charges for your account for this billing period would be:
For the host assessment rules packages: 10 agent-assessments @ $0.30 per agent-assessment
For the network reachability rules package: 10 instance-assessments @ $0.15 per instance-assessment
When you add them up, the Amazon Inspector bill would be $3.00 for host agent-assessments and $1.50 for network reachability instance-assessments, for a total of $4.50.
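If you prefer to script the assessment instead of using the console walkthrough above, the following boto3 sketch sets up and starts an Amazon Inspector (Classic) assessment. The tag, names, and one-hour duration are illustrative assumptions; real rules-package ARNs are region-specific and are fetched here via list_rules_packages:

```python
import boto3

inspector = boto3.client("inspector", region_name="ap-south-1")

# Target the EC2 instances carrying an illustrative tag.
group = inspector.create_resource_group(
    resourceGroupTags=[{"key": "InspectorScan", "value": "true"}]
)
target = inspector.create_assessment_target(
    assessmentTargetName="cve-scan-target",
    resourceGroupArn=group["resourceGroupArn"],
)

# Use the rules packages available in this region (e.g., the CVE package).
rules = inspector.list_rules_packages()["rulesPackageArns"]

template = inspector.create_assessment_template(
    assessmentTargetArn=target["assessmentTargetArn"],
    assessmentTemplateName="cve-scan-template",
    durationInSeconds=3600,          # one-hour run, an illustrative choice
    rulesPackageArns=rules,
)

run = inspector.start_assessment_run(
    assessmentTemplateArn=template["assessmentTemplateArn"],
    assessmentRunName="cve-scan-run-1",
)
print(run["assessmentRunArn"])
```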
Simplify your AWS Account Audit using AWS CloudTrail
About AWS CloudTrail

CloudTrail provides a comprehensive event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This enables governance, compliance, operational auditing, and risk auditing of your AWS account.

Enabling CloudTrail

AWS CloudTrail is now enabled by default for ALL CUSTOMERS and provides visibility into the past seven days of account activity without the need for you to configure a trail in the service to get started. If you want to access your CloudTrail log files directly or archive your logs for auditing purposes, you can still create a trail and specify the S3 bucket for your log file delivery. Creating a trail also allows you to deliver events to CloudWatch Logs and CloudWatch Events. Please follow the steps below:

Step 1: Go to the CloudTrail service console under the Services dropdown.
Step 2: Click on Trails in the left navigation and then click on Create trail.
Step 3: Enter the trail name. You can enable features based on your requirements, or sticking to the defaults will also serve your basic needs. Below are a few non-default features we enabled, along with an explanation of our reasons. Next, select an existing S3 bucket, or AWS will create a new bucket for you to save the logs.
A) Apply trail to my organization: If you have multiple AWS accounts, this feature will help you get everything in one single place.
B) Insights events: Insights events help you track unusual call volumes of write management APIs. Say your account keys are compromised and a hacker is trying to launch multiple instances; this type of write operation can be tracked by Insights. Do keep in mind that CloudTrail Insights events are charged at $0.35 per 100,000 write management events analyzed, so you may end up paying more than you expected. Please visit the pricing page to verify that you have the budget for this.
C) Data events: Data events record resource operations performed on or within a resource, for S3 and Lambda. E.g., S3 GetObject and PutObject calls can be tracked for individual buckets.

Pricing

In CloudTrail, you can view, filter, and download the most recent 90 days of your account activity for all management events in supported AWS services, free of charge. Please refer to the CloudTrail pricing page for more details.
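The same trail setup can be scripted. Below is a minimal boto3 sketch that creates a multi-region trail and starts logging; the trail and bucket names are illustrative, and the S3 bucket must already exist with a bucket policy that allows CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="ap-south-1")

# Create a trail that delivers logs for all regions to an existing S3 bucket.
cloudtrail.create_trail(
    Name="account-audit-trail",
    S3BucketName="my-cloudtrail-logs-bucket",   # illustrative bucket name
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)

# Trails do not record events until logging is explicitly started.
cloudtrail.start_logging(Name="account-audit-trail")

# The last 90 days of management events can also be queried directly.
events = cloudtrail.lookup_events(MaxResults=5)
for event in events["Events"]:
    print(event["EventName"], event["EventTime"])
```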
Cost Optimization Using Launch Template in AWS
When it comes to cloud computing, Amazon Web Services is well known for offering substantial solutions for any IT company. You don't need to own any hardware, and it gives you the opportunity to focus directly on a product rather than on infrastructure, maintenance, or upgrades. But using this solution without planning can be very dangerous: it can be very expensive, making constant cost optimization a critical necessity. Let us discuss cost optimization techniques for EC2 instances.

Strategic planning while selecting an instance class
The first and foremost rule of using EC2 instances is to identify the application's demand and purpose. AWS offers a number of different EC2 instance types based on the desired uses, e.g., general purpose, compute-optimized, memory-optimized, storage-optimized, et cetera. First, analyze your application and work out its capacity requirements; otherwise you will end up paying for over-provisioned resources.

Implementation of Auto Scaling for those instances
AWS EC2 Auto Scaling monitors your application and scales it if needed to provide the best performance at the lowest possible cost. Several metrics can be used to scale EC2 up or down (memory, CPU, etc.).

[Image: AWS EC2 Auto Scaling Management Console]

While creating an Auto Scaling group, we get two options through which we can start the creation of the Auto Scaling group:
Launch Configuration
Launch Template

Launch Configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. In the launch configuration, we specify the AMI, instance type, purchasing option, key pair, et cetera. We can use the same launch configuration with multiple ASGs. However, we can specify only one launch configuration for an ASG at a time, and you can't modify a launch configuration after you've created it. To change the launch configuration, you must create a new one and update the ASG with it.

Launch Template: An eye-catching feature that recently came into the discussion is the Launch Template. A launch template is similar to a launch configuration in that it specifies instance configuration information. Included are the ID of the AMI, the instance type, a key pair, security groups, and the other parameters that we specify while launching an EC2 instance. However, defining a launch template instead of a launch configuration allows you to have multiple versions of a template, with versioning. We can create a subset of the full set of parameters and then reuse it to create other templates or template versions.

Earlier, the ASG used to grow and shrink in response to changing conditions; now, in addition, it can also use a combination of EC2 instance types and pricing models defined by us. This gives us full control over the instance types that will be used to build our group, along with the ability to control the mix of on-demand and spot instances. While configuring the ASG with a launch template, it gives the option to select the fleet composition. Fleet composition works with two options:
In the first option, it will directly adhere to the launch template. Whatever we specify in the launch template, whether the purchasing option or the instance types, it will directly follow that.
The second option is to combine the purchasing options and instance types in one fleet. We need to specify the instance types depending upon the pricing strategy and resource requirements. The order in which we specify the instance types will also decide the launch priority of the instances. Once we are done with that, it asks for the instance distribution strategy. This specifies the allocation of on-demand and spot instances, the maximum price we are willing to pay for spot instances, and the strategy for diversifying spot instances across the instance types specified. Thereafter, we specify the desired number of instances, and it gives us the freedom to define a baseline of on-demand instances. The baseline is the number of on-demand instances the group will always keep; above the baseline, the specified value (in terms of percentage) decides how many on-demand and how many spot instances need to be launched to fulfil the desired capacity.

[Image: An Example to Understand the Scaling Behaviour]

Q. How will Auto Scaling behave if no spot instance is available within the specified bid price?
A. Let us understand a scenario where we need 12 instances in an ASG. We defined the on-demand base as equal to 2. Now the on-demand percentage above the base is 30% and the rest, 70%, are spot instances. Our specified bidding price is also very low, and hence there is no spot instance falling inside that bid; how will Auto Scaling behave? In such cases, Auto Scaling will launch the 2 on-demand base instances and 3 (i.e., 30% of 10 instances) more on-demand instances, as we specified above the base, and will keep on trying with the spot instances; if it finds any instance within that range it will launch it, else it will keep on trying until it succeeds.

Q. How much cost can be saved by implementing this Launch Template with an ASG?
A. Let us take a use case where we have an ASG with (Min = 4, Max = 10, Desired = 4).
The previously defined instance class in launch configuration is t2.xlarge, Now let us check to how much savings can be made using the various permutations and combinations in that.Scenario 1:All Ondemand Instances are present.Number of Instances in ASG = 4;Instance Capacity = t2.xlarge (4 vCPU and 16GiB memory)In such cases the calculation will look like this:Price of t2.xlarge = $0.1984/hr = 0.1984 x 24 x 30= $142.9/month.Total cost for 4 instances = $142.9 x 4 = $572/monthScenario 2:All reserved instances are present.Number of instances in ASG = 4;Instance Capacity = t2.xlarge (4 vCPU and 16GiB memory)Use of Reserved Instances:Price for t2.xlarge reserved instances = $0.118/hr = 0.118 x 24 x 30 = $84.96/monthTotal cost for 4 instances = $84.96x4 = $339.84/monthSo, a total saving of $572-$339.84 = $232.16/monthScenario 3:A combination of reserved instances and Spot Instance of t2.xlarge capacity is present.Number of instances in ASG = 4;Number of Reserved Instance = 2; Number of Spot Instance = 2Price of two t2.xlarge instances = 0.118x24x30x2 = $169.92Bidding price of t2.xlarge spot instance = $0.06/hrCalculation for a month = 0.06x24x30 = $86.4Calculation for 2 spot instances = $86.4 x 2 = $172.8Total Cost = $ 172.8 + $ 169.92 = $256.32Tatal Savings from Ondemand = $572 - 256.32 = $315.68Scenario 4:Highly Available plus cost Optimized fleetTo make our application highly available, let us make use of fleet composition:Let us specify the instance types like this:Base Instances = 2 reserved instanceOther instance compositiont2.xlarge (Spot price = $0.06/hr)t3.xlarge (Spot price = $0.0538/hr)M5.xlarge(Spot price = $0.078/hr)Let us take an extreme case where no instances of t2.xlarge capacity are available, so is with t3.xlarge) hence the ASG launched the instances of M5.xlarge capacity.The calculation will look like this:Price of two t2.xlarge instances = 0.118x24x30x2 = $169.92Bidding price of M4.xlarge spot instance = $0.078/hrCalculation for 2 spot instances = $0.078x24x30 x 2 = $112.32Total Cost = $ 169.92 + $ 112.32 = $282.24Tatal Savings from Ondemand = $572 - 282.24 = $289.76Conclusion: So, here in this fleet composition although we are using the instances of another family with diverse instance capacity still we are saving almost 52–54% of the cost that we were spending over on-demand.           Cost Comparison with various combinationsQ. What combination can help us in achieving highly available and cost-optimized ASG?A. Using a combination of reserved instances along with the spot instances and defining the instances of diverse types with almost near computing capacity (eg. another instance of almost same CPU or Memory or near to that from a different family) will make sure that the maximum chances of the spot instances will be there. If in case, one instance of the t2 class is not under the budget then it will go for t3, m4 and so on, causing the maximum chances of getting the spot instances. Sometimes the cost for the better resource may be less than the one with a lesser capacity in case of spot pricing model.Q. How difficult it is to migrate from Launch Configuration to Launch template in an ASG?A. It is very simple, just create a launch template by directly selecting the option (Copy as a launch template) in the launch configuration window. Go and update the ASG by selecting the option of launch instance using from Launch Configuration to Launch template and specifying the various purchasing option followed by the fleet combination.Q. Where else we can use Launch template?A. 
Q. What combination can help us achieve a highly available and cost-optimized ASG?
A. Use a combination of Reserved Instances and Spot Instances, and define diverse instance types with nearly the same compute capacity (e.g. an instance from a different family with roughly the same CPU and memory). This maximizes the chance of obtaining Spot Instances: if a t2-class instance is not within budget, the group falls back to t3, m4, and so on. Under the Spot pricing model, the larger resource can sometimes even cost less than the one with a smaller capacity.

Q. How difficult is it to migrate from a launch configuration to a launch template in an ASG?
A. It is very simple: create a launch template directly from the launch configuration window using the "Copy as a launch template" option, then update the ASG to launch instances from the launch template instead of the launch configuration, specifying the purchasing options and the fleet composition.

Q. Where else can we use a launch template?
A. A launch template can be used while:
i) Launching an EC2 instance
ii) Creating an Auto Scaling group
iii) Creating a Spot Fleet
iv) Creating an EC2 Fleet

Q. What is an EC2 Fleet and how is it different from a Spot Fleet?
A. Spot Fleet:
i) Created through the console, API, or AWS CLI.
ii) Uses Spot Instances, but can keep an optional baseline of On-Demand Instances if selected.
iii) Uses automatic scaling for the scaling procedure (such as step, scheduled, and target tracking scaling).
iv) Only operates in one Availability Zone.
v) Primary capacity is handled by the less expensive Spot Instances.
EC2 Fleet:
i) Created only through the API and AWS CLI.
ii) Can use On-Demand, Spot, and Reserved Instances.
iii) Cannot span Regions, so a separate fleet is needed for each Region; it can span multiple AZs.
iv) The EC2 instances themselves are configured through the fleet configuration.
v) Manages scaling based on its configuration and pricing.

Q. How is an EC2 Fleet different from ASG fleet compositions, and what are its use cases?
A. An EC2 Fleet's scaling is built on Application Auto Scaling. That suite is generic by design so that scaling can be applied across other AWS services (e.g. ECS, Spot, DynamoDB), and it relies on the individual service (EC2 in this case) to fill in any feature gaps. EC2 Auto Scaling, on the other hand, is built strictly for EC2: it has its own API suite and is more feature-rich than Application Auto Scaling. One example of a feature available in EC2 Auto Scaling but not in Application Auto Scaling is EC2 health checks, which monitor an instance and replace it if it ever becomes unhealthy.
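Since an EC2 Fleet can only be created through the API or CLI, here is a minimal, hypothetical sketch that reuses the same made-up launch template name and splits a target capacity of 4 into 2 On-Demand and 2 Spot instances; the names and numbers are assumptions for illustration only.

# Sketch only: the launch template name and capacities are assumptions.
cat > fleet-config.json <<'EOF'
{
  "LaunchTemplateConfigs": [
    {
      "LaunchTemplateSpecification": {
        "LaunchTemplateName": "my-web-template",
        "Version": "$Latest"
      },
      "Overrides": [
        { "InstanceType": "t2.xlarge" },
        { "InstanceType": "t3.xlarge" },
        { "InstanceType": "m5.xlarge" }
      ]
    }
  ],
  "TargetCapacitySpecification": {
    "TotalTargetCapacity": 4,
    "OnDemandTargetCapacity": 2,
    "SpotTargetCapacity": 2,
    "DefaultTargetCapacityType": "spot"
  },
  "Type": "maintain"
}
EOF
aws ec2 create-fleet --cli-input-json file://fleet-config.json

Unlike the ASG above, capacity here is maintained by the fleet's own target capacity rather than by EC2 Auto Scaling policies and health checks.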
AWS CloudFormation All Together – Part 1
AWS CloudFormation has many references on the Internet and many of them are very helpful, but what I found while going through them is that you have to read hundreds of them to consolidate a single template. So I finally decided to compile most of that scattered information in a single blog.

A few important points worth considering while working on complex templates:

1) Do not try to create the complete stack from a single template, because you will end up with a template of 2,000 lines or more, and if it has an error it becomes hard to track down. It is better to divide the stack into logical components and create a CloudFormation template for each of them, e.g. VPC, subnets and NAT gateway; DB subnet group, RDS, replica and CloudWatch alarms.

2) Tags are very important, as they help in filtering resources for cost reporting and other purposes.

3) The magic of Parameters: always try to pass variables through Parameters. This helps make the template universal.

To start, let's create a simple template to launch an EC2 instance. The template is divided into three parts: Parameters, Resources and Outputs.

Parameters
These are the values you need to supply while launching an instance, such as the VPC ID, AMI, role, subnet, key pair name, etc. Keeping these values in Parameters helps make the template universal.
Tip: Always create your AWS resources with proper tags. Leaving the tagging for later is a time-consuming activity.

Example parameter type:
"VPC" : {
  "Description" : "The VPC in which you want to Launch your EC2",
  "Type" : "AWS::EC2::VPC::Id"
}

For the parameter Type, prefer the AWS-specific types wherever one exists; they refer to existing AWS values in the template user's account. A few examples:
AWS::EC2::AvailabilityZone::Name - an Availability Zone, such as us-west-2a.
AWS::EC2::Image::Id - an Amazon EC2 image ID, such as ami-ff527ecf. Note that the AWS CloudFormation console won't show a drop-down list of values for this parameter type.
AWS::EC2::Instance::Id - an Amazon EC2 instance ID, such as i-1e731a32.
AWS::EC2::KeyPair::KeyName - an Amazon EC2 key pair name.
AWS::EC2::SecurityGroup::GroupName - an EC2-Classic or default VPC security group name, such as my-sg-abc.
AWS::EC2::SecurityGroup::Id - a security group ID, such as sg-a123fd85.
AWS::EC2::Subnet::Id - a subnet ID, such as subnet-123a351e.
AWS::EC2::Volume::Id - an Amazon EBS volume ID, such as vol-3cdd3f56.
AWS::EC2::VPC::Id - a VPC ID, such as vpc-a123baa3.
AWS::Route53::HostedZone::Id - an Amazon Route 53 hosted zone ID, such as Z23YXV4OVPL04A.

Resources
The AWS components we want to create from the template. Here I am creating an EC2 security group and an EC2 instance.

Outputs
This is the part where you can get the IDs or values of the AWS resources created by the template. So what's the use of Outputs? They do not seem important when creating a simple stack, but if you use a nested template where a second resource depends on values from the first, this is where Outputs come into the picture.
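As an illustration of that last point, here is a minimal sketch (the export name is made up) of one way to share a value between stacks: export it from the Outputs of one template and import it in another. With nested stacks, the same idea is achieved by reading the child stack's Outputs with Fn::GetAtt.

In the first (network) template, export the VPC ID from Outputs:
"Outputs" : {
  "VpcIdOutput" : {
    "Description" : "VPC ID to be reused by other stacks",
    "Value" : { "Ref" : "VPC" },
    "Export" : { "Name" : "MyProject-VpcId" }
  }
}

In the second template, reference it with Fn::ImportValue instead of passing it as a parameter:
"VpcId" : { "Fn::ImportValue" : "MyProject-VpcId" }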
Finally, here is my template, which will create an EC2 security group and launch an EC2 instance.

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "This Template will create EC2 INSTANCE and Security group",
  "Parameters" : {
    "TagValue1" : { "Description" : "The Project Name", "Type" : "String" },
    "TagValue2" : {
      "Description" : "The Environment name",
      "Type" : "String",
      "AllowedValues" : [ "Development", "Staging", "Production" ]
    },
    "TagValue3" : { "Description" : "The EC2 Instance Name", "Type" : "String" },
    "TagValue4" : { "Description" : "The Server Name", "Type" : "String" },
    "VPC" : {
      "Description" : "The VPC in which you want to Launch your EC2",
      "Type" : "AWS::EC2::VPC::Id"
    },
    "AMI" : {
      "Description" : "The AMI that you'll use for your EC2",
      "Type" : "AWS::EC2::Image::Id"
    },
    "IAMROLE" : {
      "Description" : "The IAM instance profile you'll use for your EC2",
      "Type" : "String"
    },
    "Subnet" : {
      "Description" : "The Subnet that you'll use for your EC2",
      "Type" : "AWS::EC2::Subnet::Id"
    },
    "KeyPairName" : {
      "Description" : "Name of an existing Amazon EC2 KeyPair for SSH access to the Web Server",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "Default" : "my-key"
    },
    "InstanceClass" : {
      "Description" : "EC2 instance type",
      "Type" : "String",
      "Default" : "t2.micro",
      "AllowedValues" : [ "t2.micro", "t2.medium", "t2.small", "t2.large",
        "m4.large", "m4.xlarge", "m4.2xlarge", "m4.4xlarge", "m4.10xlarge",
        "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge",
        "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge",
        "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge" ],
      "ConstraintDescription" : "must be a valid EC2 instance type."
    }
  },
  "Resources" : {
    "EC2SecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "SecurityGroup",
        "VpcId" : { "Ref" : "VPC" },
        "SecurityGroupIngress" : [
          { "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "0.0.0.0/0" },
          { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0" },
          { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0" }
        ]
      }
    },
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : { "Ref" : "AMI" },
        "InstanceType" : { "Ref" : "InstanceClass" },
        "IamInstanceProfile" : { "Ref" : "IAMROLE" },
        "KeyName" : { "Ref" : "KeyPairName" },
        "SecurityGroupIds" : [ { "Ref" : "EC2SecurityGroup" } ],
        "SubnetId" : { "Ref" : "Subnet" },
        "Tags" : [
          { "Key" : "Project", "Value" : { "Ref" : "TagValue1" } },
          { "Key" : "Environment", "Value" : { "Ref" : "TagValue2" } },
          { "Key" : "Name", "Value" : { "Ref" : "TagValue3" } },
          { "Key" : "Server", "Value" : { "Ref" : "TagValue4" } }
        ],
        "Tenancy" : "default"
      }
    }
  },
  "Outputs" : {
    "InstanceId" : {
      "Description" : "InstanceId of the newly created EC2 instance",
      "Value" : { "Ref" : "Ec2Instance" }
    },
    "AZ" : {
      "Description" : "Availability Zone of the newly created EC2 instance",
      "Value" : { "Fn::GetAtt" : [ "Ec2Instance", "AvailabilityZone" ] }
    },
    "PublicIP" : {
      "Description" : "Public IP address of the newly created EC2 instance",
      "Value" : { "Fn::GetAtt" : [ "Ec2Instance", "PublicIp" ] }
    },
    "PrivateIP" : {
      "Description" : "Private IP address of the newly created EC2 instance",
      "Value" : { "Fn::GetAtt" : [ "Ec2Instance", "PrivateIp" ] }
    }
  }
}

From the CLI:
aws cloudformation create-stack --stack-name MY-FIRST-STACK \
  --template-body file:///file-path.json \
  --parameters ParameterKey=AMI,ParameterValue=ami-xxx \
    ParameterKey=IAMROLE,ParameterValue=my-role \
    ParameterKey=InstanceClass,ParameterValue=t2.micro \
    ParameterKey=KeyPairName,ParameterValue=my-key \
    ParameterKey=Subnet,ParameterValue=subnet-xxxxx \
    ParameterKey=VPC,ParameterValue=vpc-xxxx \
    ParameterKey=TagValue1,ParameterValue=MyProject \
    ParameterKey=TagValue2,ParameterValue=Development \
    ParameterKey=TagValue3,ParameterValue=MyEc2 \
    ParameterKey=TagValue4,ParameterValue=WebServer
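Once the stack reaches CREATE_COMPLETE, the Outputs defined in the template can also be read back from the CLI, for example:

aws cloudformation describe-stacks --stack-name MY-FIRST-STACK --query "Stacks[0].Outputs"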

