The Cloud Native Application Architecture Nanodegree - Foundations
Written by Nikos Vaggalis   
Monday, 14 June 2021

 


If you have followed my previous write-ups of Udacity courses, such as the three-part Insider's Guide to the Java Web Developer Nanodegree, you'll know that I am happy to share my experiences. Having attended all the lessons of the Foundations course, this is my take on them:

Lesson 1: Welcome to Cloud Native Fundamentals provided a quick, high-level overview of the course. The whole purpose of the Cloud Native scheme is for businesses to become more responsive to customer feedback and more flexible in adapting to new and emerging technologies.

Lesson 2: Architecture Considerations for Cloud Native Applications went through the differences between the two prevalent architectures for building applications for the Web,
Monolith and Microservices, detailing the pros and cons of each approach. It also detailed the best development practices that pertain to Microservices: health checks, collecting metrics, writing to logs, tracing, and monitoring resource consumption.
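To give a taste of the health-check practice, here is a minimal, self-contained sketch of a service exposing a liveness probe. The endpoint path and payload are my own illustrative choices using only the Python standard library, not code from the course:

```python
import http.server
import json
import threading
import urllib.request

class HealthHandler(http.server.BaseHTTPRequestHandler):
    """Serve a /healthz endpoint that reports the app's liveness."""

    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Bind to an ephemeral port and serve in a background thread
server = http.server.HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Probe the endpoint the way an orchestrator's health check would
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz") as resp:
    status = resp.status
    payload = json.loads(resp.read())
server.shutdown()
print(status, payload)
```

An orchestrator such as Kubernetes can poll an endpoint like this periodically and restart the container when it stops answering.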

Lesson 3: Container Orchestration with Kubernetes.
So far there was just talk, a lot of talk, and it's time to get practical. As such, in this lesson we get to dockerize a simple hello-world Python application, which we then deploy on a single-node cluster on our local machine with the help of Kubernetes. After that we take a deep look into the Kubernetes ecosystem and its building blocks, the resources:

  • Pods - the atomic element within a cluster for managing an application
  • Deployments & ReplicaSets - oversee a set of pods for the same application
  • Services & Ingress - ensure connectivity and reachability to pods
  • Configmaps & Secrets - pass configuration to pods
  • Namespaces - provide a logical separation between multiple applications and their resources
  • Custom Resource Definitions (CRDs) - extend the Kubernetes API to support custom resources 

These resources are instantiated and explored through the Kubernetes CLI, kubectl.
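The imperative workflow can be sketched roughly as follows. The image name, tag and port are placeholders, and the commands assume Docker and a local Kubernetes cluster are already available:

```shell
# Package the hello-world Python app as a container image
# (assumes a Dockerfile exists in the current directory)
docker build -t hello-python:v1.0.0 .

# Create a Deployment from the image and expose it inside the cluster
kubectl create deployment hello-python --image=hello-python:v1.0.0
kubectl expose deployment hello-python --port=5000 --target-port=5000

# Explore the resources the lesson covers
kubectl get pods
kubectl get deployments,replicasets
kubectl get services
kubectl get configmaps,secrets
kubectl get namespaces
```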

At the end of the lesson we forgo the manual kubectl command-line processing and instead deploy the resources declaratively through Kubernetes Manifests. Manifests are in essence YAML configuration files; using them instead of individual kubectl commands is analogous to using Docker Compose instead of the docker CLI.
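For instance, a Deployment for the hello-world app could be declared in a manifest along these lines. The names, image tag and port are illustrative, not taken from the course material:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-python
  labels:
    app: hello-python
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-python
  template:
    metadata:
      labels:
        app: hello-python
    spec:
      containers:
        - name: hello-python
          image: hello-python:v1.0.0
          ports:
            - containerPort: 5000
```

Applied with `kubectl apply -f deployment.yaml`, the manifest both creates the resources and serves as their versionable record.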

Lesson 4: Open Source PaaS
In this lesson we examine PaaS as a solution for businesses that do not possess the resources to go on-premises. As such, these kinds of businesses find it a natural move to delegate the management of their platform components to a third party.

Another case for adopting PaaS is that, although Kubernetes has distinct advantages, managing clusters at scale is not easy, especially when hosting region-specific clusters of multiple nodes. In this scenario it is more suitable to delegate platform management to a PaaS.

The rest of the lesson revolved around the pros as well as the trade-offs of hosting on-premises vs IaaS vs PaaS:  

  • On-premises - where an engineering team has full control over the platform, including the physical servers
  • IaaS or Infrastructure as a Service - where a team consumes compute, network, and storage resources from a vendor
  • PaaS or Platform as a Service - where the infrastructure is fully managed by a provider, and the team is focused on application deployment 

Put simply, if you want total control of your stack and of course have the resources, then go with on-premises. When you do not need to manage the Networking, Storage, Servers and Virtualization layers yourself, you can relinquish more control to an IaaS solution, and when, on top of IaaS, you also want to outsource the Runtime and Middleware layers and keep just the Application development and Data layers to yourself, you should opt for PaaS.

However, with PaaS you also get locked in to a vendor. To avoid that, the lesson introduces Cloud Foundry, an open-source, stand-alone PaaS software package that can be installed on any available infrastructure: private, public, or hybrid cloud.

Finally, another option, Function as a Service, is introduced, which alleviates the biggest problem that PaaS carries: that an application is always online, up and running, consuming resources. If you are more cost-conscious you can instead opt for FaaS, which runs snippets of code only whenever there's demand.

Of course, choosing between Kubernetes, PaaS or FaaS depends on the given requirements.

Lesson 5: CI/CD with Cloud Native Tooling

Split into two logical sections, Continuous Integration (CI) and Continuous Delivery (CD), the lesson first explores what Continuous Application Deployment is as a whole.

In the CI section we explore an example of using GitHub Actions to build, test, and package an application as a Docker image.
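A workflow of this kind could look roughly like the following sketch. The repository name, image tag and secret names are placeholders; the steps use the stock checkout, Docker login and build-push actions:

```yaml
name: CI
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Run the test suite before packaging
      - name: Run tests
        run: |
          pip install -r requirements.txt
          python -m pytest

      # Build the image and push it to a registry
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: myuser/hello-python:v1.0.0
```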

Then, in the Continuous Delivery section, we learn that it is the process that takes place after CI and pushes the code to the end users. It is common practice to push the code through at least three environments: sandbox, staging, and then production/end users.

To deploy our Docker image to a Kubernetes cluster we use the ArgoCD tool, and a complete ArgoCD walkthrough follows, deploying an Nginx application to a cluster.

While this was a simple deployment to the sandbox environment using manifests, when we need to push the image to staging and production, which could have different configurations, the need for a configuration management system arises. As such, the rest of the lesson explores one such tool, Helm.
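The idea behind Helm can be sketched as a single chart parameterized by per-environment values files. The value names below are illustrative, not from the course:

```yaml
# values-staging.yaml - environment-specific overrides for a chart
replicaCount: 1
image:
  repository: myuser/hello-python
  tag: v1.0.0
service:
  port: 5000
```

The same chart is then deployed per environment, e.g. `helm install myapp ./myapp-chart -f values-staging.yaml`, swapping in a different values file (more replicas, a promoted image tag) for production.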

Finally, the lesson completes with a comparison between push- and pull-based CI/CD models, which also signals the end of the Foundations course.

My impression after completing the course is purely positive. I've been introduced to cutting-edge technology and architectures, and through practical examples understood the Cloud Native stack and the reasons it constitutes the future of writing and deploying software applications.

I want to note too that Katie Gamanj, in charge of this first course, did a great job in calmly and clearly explaining the concepts in detail.

So is continuing on this learning path and signing up for the full Nanodegree worth it? I would say a resounding yes, based on my experience and the cutting-edge topics that follow in the rest of the program: 

  • Course 2: Message Passing, which focuses on refactoring microservice capabilities out of a monolithic architecture and employing different forms of message passing in microservices
  • Course 3: Observability, which focuses on collecting system performance data using Prometheus, collecting application tracing data using Jaeger, and visualizing the results in a dashboard using Grafana
  • Course 4: Microservices Security, which focuses on hardening a Docker and Kubernetes microservices architecture

My verdict is that Cloud Native Application Architecture is an excellent way to modernize your skills in constructing cutting-edge applications, with a view to looking for positions in datacenters or large, innovative organizations that have invested deeply in such architectures - and of course, as part of any Nanodegree, Udacity offers services to build a convincing resume.


Cloud Native Application Architecture is just one of the programs on offer from Udacity's School of Cloud Computing. For more options see:

Udacity's School of Cloud Computing

New Udacity Cloud Nanodegree Programs

Azure, Azure Everywhere - New Developer Nanodegree

More Information

Cloud Native Application Architecture

Related Articles

Udacity Cloud Nanodegree Programs

Professional Credentials For Computer Science Careers  

The Insider's Guide to the Java Web Developer Nanodegree - 1

The Insider's Guide to the Java Web Developer Nanodegree - 2

The Insider's Guide to the Java Web Developer Nanodegree - 3

 






Last Updated ( Tuesday, 15 June 2021 )