Learn Amazon Web Services in a Month of Lunches

PART 2 - THE AWS POWER USER: OPTIMIZING YOUR INFRASTRUCTURE

Chapter 13 Keeping ahead of user demand

If all goes to plan, you’d expect your website usage to increase. This increase can bring potential problems: you’ll likely need additional AWS resources and services, together with some capacity for system recovery. Luckily, resources can be added quickly, even for just key periods (e.g. a sales promotion).

The author briefly discusses how the impact of application failure can be significantly reduced by using additional servers and resources in a different region (i.e. High Availability). Load balancing distributes the workload more evenly across your servers and resources, and auto scaling starts and stops servers and resources as usage waxes and wanes.

Next, the advantages of cloud computing are outlined (e.g. scalability, on-demand provisioning, paying only for what you use), before the discussion moves on to elasticity and scalability. The author suggests elasticity is the ability to respond to changes in resource usage, whereas scalability refers to the way a system is designed to meet changing demand. I suspect that, for most people, the terms are interchangeable.

This chapter provides a reminder of the cloud’s major advantages, in particular its ability to adapt in response to changing usage. The next few chapters go into more detail about High Availability.

Chapter 14 High availability: working with AWS networking tools

Creating a copy of your system in another area (an Availability Zone) facilitates High Availability and prevents a system failure from becoming a disaster.

A Virtual Private Cloud (VPC) ties together your resources and their connectivity, which allows the group to be replicated for High Availability, system testing, etc. The importance of network Access Control Lists as a secondary line of security defence is noted.

Next, step-by-step walkthroughs are provided on creating a new VPC, using both the manual method and the much easier wizard. The chapter continues with a look at Availability Zones and network subnets, in the context of security and reliability. Subnets allow components of an application to be distributed across Availability Zones, and the complete application to be replicated. The chapter ends with a practical deployment of a website across two Availability Zones.
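To give a flavour of what the wizard automates (the book itself works through the AWS console), here is a minimal sketch using Python’s boto3 library; the region, CIDR blocks and Availability Zone names are placeholder assumptions, not values from the book.

import boto3

# Assumed region and address ranges - adjust to match your own setup.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC that will contain the application's resources.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One subnet per Availability Zone, so the application can be replicated.
subnet_a = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                             AvailabilityZone="us-east-1a")["Subnet"]["SubnetId"]
subnet_b = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                             AvailabilityZone="us-east-1b")["Subnet"]["SubnetId"]

# An internet gateway plus a route table give the subnets a path to the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_a)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_b)

Creating one subnet per zone is what later makes it possible to replicate the complete application across zones.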

I note that some terms are used before being defined, but are often defined later in the chapter (e.g. Network ACL, route table). Some terms are only defined at the end of the chapter (e.g. NAT - Network Address Translation). Occasionally, the author drops into a mode where he assumes too much of the reader.

There’s a helpful reminder that using NAT will incur charges since it lies outside the Free Tier usage. I’m not sure if this is a chapter on security or High Availability (it’s both, but not obviously so).

Chapter 15 High availability: load balancing

Now we turn to the practicalities of implementing High Availability. Specifically, we look at the Elastic Load Balancer (ELB), which can monitor system health and redirect traffic to other instances when problems occur or the workload increases.

There’s a step-by-step walkthrough on building a multizone ELB, involving: 

  • Create 4 EC2 instances in 2 different Availability Zones

  • Create a target group, and configure a health check

  • Create a load balancer linked to the 2 subnets hosting your instances

  • Create security groups for the instances and for the load balancer

  • Associate your target group with the load balancer 

The remainder of the chapter shows how to test the cluster you’ve created.
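For readers who prefer the command line to the console the book uses, a rough sketch of those steps with boto3’s elbv2 client might look like the following; the names and IDs below are placeholder assumptions, not values from the book.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder IDs - substitute the VPC, subnets and instances created earlier.
VPC_ID = "vpc-0123456789abcdef0"
SUBNET_IDS = ["subnet-aaaa1111", "subnet-bbbb2222"]      # one per Availability Zone
SECURITY_GROUP_ID = "sg-0123456789abcdef0"
INSTANCE_IDS = ["i-aaaa1111", "i-bbbb2222", "i-cccc3333", "i-dddd4444"]

# Target group with a simple HTTP health check.
tg_arn = elbv2.create_target_group(
    Name="demo-targets", Protocol="HTTP", Port=80, VpcId=VPC_ID,
    HealthCheckProtocol="HTTP", HealthCheckPath="/",
)["TargetGroups"][0]["TargetGroupArn"]

# Register the four instances (spread across the two zones) with the target group.
elbv2.register_targets(TargetGroupArn=tg_arn,
                       Targets=[{"Id": i} for i in INSTANCE_IDS])

# Application Load Balancer spanning the two subnets.
lb_arn = elbv2.create_load_balancer(
    Name="demo-alb", Subnets=SUBNET_IDS, SecurityGroups=[SECURITY_GROUP_ID],
)["LoadBalancers"][0]["LoadBalancerArn"]

# Listener that forwards incoming HTTP traffic to the target group.
elbv2.create_listener(LoadBalancerArn=lb_arn, Protocol="HTTP", Port=80,
                      DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}])

The listener is what actually ties incoming traffic to the target group; the health check then determines which registered instances continue to receive it.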

Chapter 16 High availability: auto scaling

Load balancers allow the workload to be distributed more evenly across your resources. This chapter looks at another aspect of High Availability, namely auto-scaling, which can automatically respond to changes in demand (up or down) or to system failure by adding or removing resources in a timely manner.

Auto-scaling involves creating a Launch Configuration, which defines the resources you want the auto-scaler to use (e.g. the EC2 instance type and AMI), together with an auto scaling group, which defines how and when to scale. The chapter provides step-by-step instructions on how to create and apply these.
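A minimal boto3 sketch of those two pieces is shown below; the AMI ID, names and subnet IDs are placeholder assumptions, and the scaling rule at the end is just one possibility rather than the book’s own example.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configuration: what each newly launched instance should look like.
# The AMI ID below is a placeholder.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="demo-launch-config",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro")

# Auto scaling group: how many instances to run, and which subnets to place them in.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="demo-asg",
    LaunchConfigurationName="demo-launch-config",
    MinSize=2, MaxSize=4, DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222")   # subnets in two zones

# One possible scaling rule: keep average CPU around 50 per cent.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-asg",
    PolicyName="demo-cpu-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0})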

There’s a useful diversion into the different EC2 pricing options, namely:

  • On-demand – you launch EC2, use it, shut it down, pay for usage

  • Spot – instances start/end depending on your pre-set price limits

  • Reserved – you can reserve an instance for up to 3 years (typically around 50% of the on-demand price)

This chapter provides a practical walkthrough of auto-scaling, with plenty of helpful screenshots and discussion. There’s a useful point about closing down the scaling group itself, since closing down an EC2 instance would simply cause auto-scaling to start another instance in its place.

Chapter 17 High availability: content-delivery networks

Another aspect of High Availability is ensuring the user gets their data quickly. It’s noted that fastidious users will only wait a few seconds before looking elsewhere. Content-Delivery Networks (CDNs) can help improve the delivery times of your data.

The chapter opens with a look at Amazon’s CDN, CloudFront. CloudFront caches copies of your website’s content at various remote edge locations, which can subsequently serve requests from users near those locations, thus improving their local performance.

The practical part of the chapter provides step-by-step guidance on how to create a CloudFront distribution, distributing some of your website’s content, and integrating it with the clustered load balancer created in a previous chapter.
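As a rough indication of what a distribution involves, here is a minimal boto3 sketch that puts CloudFront in front of a load balancer; the origin domain name and identifiers are placeholder assumptions, not values from the book, and it uses the legacy ForwardedValues cache settings for brevity.

import boto3
import time

cloudfront = boto3.client("cloudfront")

# Placeholder: the public DNS name of the load balancer from the earlier chapter.
ORIGIN_DOMAIN = "demo-alb-1234567890.us-east-1.elb.amazonaws.com"

response = cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),        # any unique string
    "Comment": "Demo distribution fronting the load balancer",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "demo-alb-origin",
            "DomainName": ORIGIN_DOMAIN,
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "http-only",
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "demo-alb-origin",
        "ViewerProtocolPolicy": "allow-all",
        # Legacy cache settings, kept minimal for the sketch.
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
        "MinTTL": 0,
    },
})
print(response["Distribution"]["DomainName"])   # the *.cloudfront.net address to use

Once the distribution has deployed, users are served from the nearest edge location rather than directly from the load balancer.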

This chapter provides details of an interesting mechanism to improve performance, but is a CDN truly a component of High Availability?


