Cost Optimisation on AWS: How We Reduced a Client's Bill by 40%

5 min read
AWS, Cost Optimisation, Cloud

A professional services firm came to us spending approximately 8,500 pounds per month on AWS, a figure that had grown steadily over three years without corresponding growth in usage. Their team knew the bill was too high but lacked the time and AWS expertise to identify specific savings. After a four-week analysis and optimisation engagement, we reduced their monthly spend to approximately 5,100 pounds, a 40% reduction with no impact on performance or availability.

The analysis started with AWS Cost Explorer and the Trusted Advisor cost optimisation checks. Within the first day, we identified three development and staging environments running 24/7 that were only used during business hours. Scheduling these to run twelve hours per day, five days per week, cut their runtime from 168 to 60 hours a week, roughly a 64% reduction, and saved approximately 900 pounds per month. This is the most common waste pattern we see: non-production environments left running continuously.
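A schedule like this is usually implemented as a scheduled Lambda that stops and starts instances by tag. The sketch below shows the core decision logic; the 08:00 to 20:00 weekday window is an assumption for illustration, and the boto3 calls it would drive are noted in comments.

```python
from datetime import datetime

# Assumed business-hours window: 08:00-20:00, Monday-Friday (12 hours x 5 days).
START_HOUR, END_HOUR = 8, 20

def should_run(now: datetime) -> bool:
    """Return True if a non-production environment should be up at this time."""
    is_weekday = now.weekday() < 5               # Monday=0 .. Friday=4
    in_hours = START_HOUR <= now.hour < END_HOUR
    return is_weekday and in_hours

# A scheduled Lambda would evaluate this and then, for the tagged instances:
#   ec2.start_instances(InstanceIds=ids)   when should_run(...) is True
#   ec2.stop_instances(InstanceIds=ids)    when it is False
```

Triggering the function every half hour via an EventBridge schedule keeps the logic stateless: it only ever needs to know the current time.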

The most expensive cloud resource is the one nobody knows is running.

Right-sizing was the second largest opportunity. We used CloudWatch metrics to identify EC2 instances with sustained CPU utilisation below 15%. Several instances had been provisioned as m5.xlarge when t3.medium would have been more than sufficient. We also found an RDS instance running db.r5.2xlarge for a database that rarely exceeded 4GB of RAM. Downgrading to db.r5.large saved 480 pounds per month with no measurable performance change.
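The selection step is simple once the metrics are in hand. A minimal sketch, assuming the average CPU figures have already been fetched from CloudWatch (namespace `AWS/EC2`, metric `CPUUtilization`) over a sampling window of your choosing:

```python
# Average CPU utilisation per instance over the sample window, as percentages.
# In practice these values would come from CloudWatch (GetMetricStatistics or
# GetMetricData) rather than being hard-coded.
CPU_THRESHOLD = 15.0  # sustained utilisation below this flags a candidate

def downsize_candidates(avg_cpu_by_instance: dict[str, float]) -> list[str]:
    """Return instance IDs whose average CPU sits under the threshold."""
    return [iid for iid, cpu in avg_cpu_by_instance.items()
            if cpu < CPU_THRESHOLD]

sample = {"i-0abc": 4.2, "i-0def": 61.0, "i-0123": 11.8}
print(downsize_candidates(sample))  # instances worth a closer look
```

A flagged instance is a candidate, not a verdict: memory pressure, burst patterns, and reserved capacity all deserve a look before changing the instance type.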

The Optimisation Approach

The third category was architectural. The client was using a NAT Gateway for all outbound internet traffic from private subnets, costing approximately 350 pounds per month in data processing charges. By adding gateway VPC endpoints for S3 and DynamoDB, which carry no charge, we routed that traffic off the NAT Gateway and eliminated the majority of its data processing costs. We also moved infrequently accessed S3 data to S3 Glacier Instant Retrieval, saving 60% on storage costs for 2TB of archived content.
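The Glacier transition itself is just a lifecycle rule on the bucket. A sketch of building one, assuming a hypothetical `archive/` prefix and a 90-day threshold, neither of which is taken from the engagement:

```python
def glacier_ir_rule(prefix: str, after_days: int) -> dict:
    """Build one S3 lifecycle rule moving objects under `prefix` to
    S3 Glacier Instant Retrieval after `after_days` days."""
    return {
        "ID": f"archive-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": after_days, "StorageClass": "GLACIER_IR"},
        ],
    }

# Applied with boto3:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket",  # hypothetical bucket name
#       LifecycleConfiguration={"Rules": [glacier_ir_rule("archive/", 90)]})
```

`GLACIER_IR` is the storage-class value the S3 API uses for Glacier Instant Retrieval; objects stay retrievable in milliseconds, which is what makes this tier safe for content that is merely infrequently accessed rather than truly cold.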

  • Schedule non-production environments to run only during business hours
  • Right-size instances based on actual CloudWatch utilisation metrics
  • Use VPC endpoints for S3 and DynamoDB to reduce NAT Gateway costs
  • Implement S3 lifecycle policies to move infrequently accessed data to cheaper tiers
  • Purchase Savings Plans for stable, predictable workloads
  • Review costs monthly and set up billing alerts for unexpected increases
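The last item on that list, billing alerts, can be covered with a CloudWatch alarm on the account's `EstimatedCharges` metric. A sketch of the alarm parameters, assuming a hypothetical threshold and SNS topic; note that AWS reports this metric in USD, so a sterling budget needs converting first:

```python
def billing_alarm_params(threshold_usd: float, sns_topic_arn: str) -> dict:
    """Parameters for a CloudWatch alarm on the account's estimated charges.
    The AWS/Billing metric updates a few times a day, hence the 6-hour period."""
    return {
        "AlarmName": "monthly-bill-threshold",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,            # 6 hours, in seconds
        "EvaluationPeriods": 1,
        "Threshold": threshold_usd,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

# cloudwatch.put_metric_alarm(**billing_alarm_params(9000.0, topic_arn))
```

Billing metrics require the "receive billing alerts" preference to be enabled in the account, and the alarm must live in us-east-1, where AWS publishes them.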

Finally, we purchased Savings Plans for the baseline compute that runs continuously. A one-year compute Savings Plan at the all-upfront rate provided a 35% discount on the remaining EC2 and Fargate usage. Combined with the right-sizing and scheduling changes, this brought the total monthly spend to 5,100 pounds. The entire engagement paid for itself within the first month of savings.
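The arithmetic here is straightforward and worth sanity-checking. A short sketch; the Savings Plan function is generic, and the headline figures are the ones quoted in this post:

```python
def savings_plan_monthly(on_demand_monthly: float, discount: float = 0.35) -> float:
    """Effective monthly cost of usage covered by a Savings Plan discount."""
    return on_demand_monthly * (1.0 - discount)

# The headline numbers check out as a straight percentage:
before, after = 8500.0, 5100.0
print(f"overall reduction: {(before - after) / before:.0%}")
# prints: overall reduction: 40%
```

The same function makes the trade-off explicit when comparing one-year and three-year terms: a larger discount against a longer commitment to the baseline.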

Want to Chat?

Contact our friendly team for quick and helpful answers.

Contact us