Google Cloud Architecture for the Impatient | Part II
This is part 2 of a 3-part primer for IT professionals in a hurry, discussing the minimum products and requirements for architecting on Google Cloud Platform.
In the first part of this series, we discussed a few of the infrastructure components available in GCP. Now, let’s discuss some of the products that augment this infrastructure. These solutions act as force-multipliers, which allow you to get more done with fewer resources and systems administrators.
Scaling and High-Availability
Most enterprise-level applications need multiple copies of a service running concurrently. This may be to handle more workload by simply running more instances of the service (horizontal scaling), as opposed to vertical scaling, which increases the vCPUs or RAM of a single server. Having copies of a service in separate locations is also valuable in case one copy crashes or an accident destroys a physical host. Scaling in a highly available manner, where the end user experiences no interruption if one instance goes down, requires a single endpoint that redirects user traffic to healthy instances. GCP provides tools to automatically scale the number of instances of a service and to direct traffic across them.
Instance Groups
Instance groups allow you to bundle your VMs together for load-balancing and manageability purposes. They can also be configured with an instance template that allows the instance group to automatically create additional instances if demand increases (autoscaling) or an existing VM crashes (autohealing).
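The autohealing behavior is easy to picture with a small simulation. This is illustrative Python only, not a GCP API; the instance names and the `autoheal` function are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    healthy: bool = True

def autoheal(group: list[Instance], template_prefix: str) -> list[Instance]:
    """Replace any unhealthy instance with a fresh one stamped out from
    the 'instance template' (modeled here as just a name prefix)."""
    healed = []
    for i, inst in enumerate(group):
        if inst.healthy:
            healed.append(inst)
        else:
            # In a managed instance group, GCP recreates the VM
            # from the instance template automatically.
            healed.append(Instance(name=f"{template_prefix}-{i}", healthy=True))
    return healed

group = [Instance("web-0"), Instance("web-1", healthy=False), Instance("web-2")]
group = autoheal(group, "web-new")
print([i.name for i in group])  # ['web-0', 'web-new-1', 'web-2']
```

The key idea is that the group's target size and template, not any individual VM, are the source of truth: a crashed VM is simply recreated to match.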
Load Balancers
If a natural disaster disrupts a data center or a hardware failure fries a rack of servers, a load-balancer can detect any unhealthy or absent VMs and direct customer traffic to a healthy instance without intervention. Since GCP networks operate at a global scale, this could mean the users that are normally directed to the Sydney data center are temporarily directed to Singapore instead. GCP makes this automatic via anycast IPs that route a user to the nearest VM which can satisfy the request. This means a cloud architect no longer needs to design a routing solution to handle users from different regions. For example, instead of having users in Australia visit www.example.com.au, there can be a single domain such as www.example.com that all users in the world can use.
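The routing decision described above can be reduced to a toy model: among the healthy backends, pick the one nearest the user. This is a deliberately simplified sketch of what the global load balancer does with anycast IPs; the region names and distance table are made up:

```python
def route(user_region: str,
          backends: dict[str, bool],
          distances: dict[tuple[str, str], int]) -> str:
    """Pick the nearest healthy backend region for a user.
    backends maps region -> healthy?; distances maps
    (user_region, backend_region) -> relative distance."""
    healthy = [r for r, ok in backends.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy backends available")
    return min(healthy, key=lambda r: distances[(user_region, r)])

backends = {"sydney": False, "singapore": True}   # Sydney is down
distances = {("australia", "sydney"): 1, ("australia", "singapore"): 2}
print(route("australia", backends, distances))    # singapore
```

When Sydney recovers (`backends["sydney"] = True`), the same call routes Australian users back to Sydney, with no change to the single domain the users see.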
Auto-Scaling
An instance group can be set to automatically scale based on any metric, such as CPU usage or number of connections. This is enabled just by checking a box stating that you want auto-scaling and providing the target values for certain metrics. This alleviates the need for a systems administrator to wake up to a late-night page in order to provision another machine to handle an unexpected increase in workload.
The autoscaler will also delete VMs when workload decreases. This could be a huge savings for applications with long lulls in workload as you will not pay for VMs to sit idle. A common case is a website targeted to a particular region, say a state government website, where most traffic will be during waking hours for that given region.
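The sizing math behind a target-utilization autoscaler can be sketched in a few lines. This is an approximation for illustration, with invented function and parameter names; the real GCP autoscaler also smooths decisions over a stabilization window before removing VMs:

```python
import math

def recommended_size(current: int, avg_utilization: float, target: float,
                     min_instances: int = 1, max_instances: int = 10) -> int:
    """Resize the group so that average utilization lands near the
    target, clamped to the configured min/max bounds."""
    desired = math.ceil(current * avg_utilization / target)
    return max(min_instances, min(max_instances, desired))

# 4 VMs running hot at 95% against a 50% target: scale out.
print(recommended_size(current=4, avg_utilization=0.95, target=0.50))  # 8
# 8 VMs idling at 10%: scale back in, so you stop paying for idle VMs.
print(recommended_size(current=8, avg_utilization=0.10, target=0.50))  # 2
```

The clamp matters in practice: the `max_instances` bound caps your spend during a traffic spike, and `min_instances` keeps a baseline serving capacity during lulls.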
Automated Infrastructure
Every GCP product has a REST API, allowing you to automate provisioning, maintenance, and monitoring tasks. These APIs can be explored via the APIs Explorer.
Deployment Manager
Deployment Manager is a hosted tool that allows you to define the entire infrastructure needed by your application in template files. This allows you to version control your infrastructure definition. More importantly, it enables exact clones of your infrastructure to be deployed multiple times. There are numerous other benefits to defining your Infrastructure-as-Code.
Perhaps you would like multiple environments, such as Development, Staging, and Production, for your application in order to promote the latest versions of your application code. Deployment Manager allows you to define the infrastructure once but deploy to each of these environments. Your workflow could be to promote an application version from one environment to the next.
First, deploy both the infrastructure and application to Development whenever there is a new version. Then, when that version has been tested and blessed, deploy the same version of the infrastructure and application to Staging. Finally, deploy the version to Production. It is likely that any infrastructure misconfigurations will be identified in the earlier environments as each environment will be deploying near identical infrastructure.
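A minimal Deployment Manager configuration looks like the sketch below. The resource name, zone, machine type, and image here are placeholder values for illustration, not a recommended production setup:

```yaml
# config.yaml -- a single-VM deployment definition
resources:
- name: app-vm-staging
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-medium
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
```

Because the file is just text, the Staging and Production deployments can be created from the same definition (for example with `gcloud deployment-manager deployments create staging --config config.yaml`), which is what makes the environments near-identical clones of each other.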
The practice of defining your infrastructure in this way has many other benefits as well. Instead of filing complicated change management requests whenever a new piece of infrastructure is required or a configuration needs to change, developers, administrators, or operators can make the change themselves in the code repository that defines the infrastructure. The changes can then be peer-reviewed and merged, and the infrastructure updated automatically. This also promotes coordination between development and operations (DevOps) by removing barriers between teams and processes.
Cloud Launcher
Cloud Launcher builds on Deployment Manager by allowing various third parties to upload their infrastructure definitions to a kind of marketplace. For example, you can spin up an entire WordPress site by clicking a button in Cloud Launcher. This will provision the required VMs and storage and then configure the software, all of which is defined in a Deployment Manager template for you.
While deploying to the cloud removes the burden of managing hardware, these automation tools further simplify the initial and ongoing management of infrastructure. Google takes this yet another step forward by providing a number of Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) solutions that can ease the responsibility of the systems administrator and give more power to the developer.
In the final part of this series, we will discuss the fully managed services Google provides, which allow you to get work done without ever being concerned with infrastructure.
About the Author
John Libertine is a Google Certified Architect and VMware Certified Professional who specializes in hybrid cloud infrastructure consulting and training for Stone Door Group, a DevOps solutions integrator that helps companies execute on their digital transformation initiatives. To learn more, drop us a line at letsdothis@stonedoorgroup.com.