Google Cloud Platform Hosting is a powerful tool for businesses and individuals alike. It offers a range of services including computing, storage, and networking.
To get started with Google Cloud Platform Hosting, you'll need to sign up for a Google Cloud account. This is a straightforward process that can be completed online.
Google Cloud offers a free tier of service that allows you to try out its features without spending a dime. This is a great way to get a feel for the platform before committing to a paid plan.
GCP Pricing
Google Cloud Platform follows a pay-as-you-go model, so you only pay for the resources you use.
You can save money with committed use discounts on Compute Engine resources such as vCPUs, memory, and GPUs, which can yield discounts of more than 50%.
Google Cloud Pricing Calculator is a helpful tool to estimate the pricing of prospective cloud deployments.
Per second billing is available for virtual machines through Compute Engine and several other services, which can help you avoid charges for rounding up to greater units of time.
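As a rough illustration of why per-second billing matters (the hourly rate below is hypothetical, not a published Compute Engine price), compare a per-second charge with hourly rounding:

```shell
# Hypothetical rate for illustration only; see the GCP pricing pages
# for real machine-type rates.
rate_per_hour=0.05
seconds_used=4500   # a 75-minute run

# Per-second billing charges exactly the seconds used:
awk -v r="$rate_per_hour" -v s="$seconds_used" \
    'BEGIN { printf "per-second cost: $%.4f\n", r / 3600 * s }'

# Hourly rounding would bill the same run as 2 full hours:
awk -v r="$rate_per_hour" \
    'BEGIN { printf "hourly-rounded cost: $%.4f\n", r * 2 }'
```

Here the rounded bill is 60% higher for the same workload; the gap grows for short-lived or bursty instances.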
Compute Engine also provides sustained use discounts, applied automatically when you run a virtual machine for a significant portion of the billing month.
You can also choose custom virtual machine types to fine-tune the sizes and tailor your pricing for your workloads.
Google Cloud Platform offers a generous free tier with limited usage of various services, allowing you to explore and experiment with GCP without incurring charges.
The free tier typically includes a certain amount of usage for services like Compute Engine, App Engine, Cloud Storage, BigQuery, and more.
GCP's flexible pricing model means you pay only for the resources you use, making it cost-effective for varying workloads.
Automatic discounts based on sustained use of resources can reduce costs for consistent workloads.
The free tier offers enough resources to host small websites or test new projects without incurring costs.
Competitors and Comparison
Google Cloud faces strong competition from other public cloud providers, particularly AWS and Microsoft Azure. These three major public clouds have been evolving to offer similar suites of services and capabilities.
AWS is the oldest and most mature public cloud, emerging in 2006 and possessing the largest market share. It appeals to a broad customer base ranging from individual developers to major enterprises to government agencies.
Microsoft Azure, on the other hand, has proven particularly attractive to Microsoft-based environments, making it easier to transition workloads from data centers to Azure. Azure is the second-largest public cloud and often caters to larger enterprise users.
Google Cloud, whose first public service, App Engine, launched in 2008, has developed a strong reputation for its compute, network, big data, and machine learning/AI services. However, it's currently the smallest of the three major public clouds.
Here's a brief comparison of the three major public clouds:
- AWS: The oldest and most mature public cloud, offering the broadest range of general tools and services.
- Microsoft Azure: Attractive to Microsoft-based environments, with a strong focus on enterprise-grade services.
- Google Cloud: Strong reputation for its compute, network, big data, and machine learning/AI services.
Cloud adopters should carefully investigate and experiment with the suite of services provided by each cloud provider before committing to a particular platform.
Certification and Learning
Google Cloud Platform offers a range of training options to help you get started with their services, including cloud infrastructure, application development, and machine learning.
Google Cloud certification paths are designed to validate your expertise on a professional level, and there are currently three levels of certification: Foundational, Associate, and Professional.
The Foundational certification is perfect for new or non-technical cloud users, while the Associate certification is geared towards Cloud Engineer roles and focuses on deployment, monitoring, and maintenance of workloads running in Google Cloud.
To achieve a Professional certification, you'll need at least three years of industry experience, including one year of hands-on experience with Google Cloud.
Here's a breakdown of the three levels of certification:
- Foundational: aimed at new or non-technical cloud users.
- Associate: geared toward hands-on roles such as Associate Cloud Engineer.
- Professional: aimed at experienced practitioners, with certifications such as Professional Cloud Architect and Professional Cloud DevOps Engineer that test skills like deployment automation and scaling applications under sudden load.
Environment Setup and Configuration
To set up your environment for Google Cloud Platform hosting, you'll need to create a new project or reuse an existing one in the Cloud Console. Remember to note down the project ID, a unique name across all Google Cloud projects, which will be referred to later in the process.
You'll also need to enable billing in the Cloud Console to use Google Cloud resources. Don't worry, running through this codelab shouldn't cost much, and new users are eligible for the $300 USD Free Trial program.
To configure your environment, you'll need to create a Cloud Storage bucket and copy the startup script into it. You can also choose from a variety of zones for your project; for more information, see Regions & Zones.
Here's a quick rundown of the steps:
- Sign in to Cloud Console and create a new project or reuse an existing one.
- Enable billing in Cloud Console.
- Create a Cloud Storage bucket and copy the startup script into it.
- Choose a zone for your project.
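Assuming the gcloud and gsutil CLIs are available (they come preinstalled in Cloud Shell), the steps above might look like this; PROJECT_ID, BUCKET_NAME, and the zone are placeholders to replace with your own values:

```shell
# Point the CLI at your project
gcloud config set project PROJECT_ID

# Create a Cloud Storage bucket and copy the startup script into it
gsutil mb gs://BUCKET_NAME
gsutil cp startup-script.sh gs://BUCKET_NAME/

# Pick a default zone (see Regions & Zones for the full list)
gcloud config set compute/zone us-central1-f
```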
Environment Setup
To sign in to Cloud Console, you'll need to create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one. This will give you a unique project ID, which you'll need later.
You'll also need to enable billing in Cloud Console to use Google Cloud resources. Don't worry, running through this codelab shouldn't cost much, if anything at all. Be sure to follow any instructions in the "Cleaning up" section to shut down resources and avoid incurring billing beyond this tutorial.
Next, create a Cloud Storage bucket and copy the created startup-script.sh file into it. Remember to replace [BUCKET_NAME] with the actual name of your Cloud Storage bucket.
To create a managed instance group, you'll need to create two instance templates. These templates will be used to configure the instances for the frontend and backend microservices. You can create the instance templates from the existing instances you've created.
To enable Compute Engine API, you'll need to accept the terms of service and billing responsibility. You can do this by executing a command in Cloud Shell.
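Enabling the API is a single command in Cloud Shell:

```shell
# Enable the Compute Engine API for the current project
gcloud services enable compute.googleapis.com
```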
To create a Compute Engine instance, you'll need to create a startup script. This script will be used to configure the instance when it's started. You can create the startup script by navigating to the monolith-to-microservices folder and creating a new file called startup-script.sh.
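The codelab supplies the actual script contents; as a rough, hypothetical sketch of what such a startup script tends to contain (the package names and paths here are assumptions, not the codelab's real script):

```shell
cat > startup-script.sh <<'EOF'
#!/bin/bash
# Hypothetical sketch of a Compute Engine startup script:
# install a Node.js runtime, fetch the app from Cloud Storage, start it.
apt-get update
apt-get install -y nodejs npm
gsutil cp -r gs://BUCKET_NAME/monolith-to-microservices /opt/app
cd /opt/app && npm install && npm start
EOF
```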
Here are the steps to create a Compute Engine instance:
1. Create a startup script to configure instances.
2. Clone source code and upload it to Cloud Storage.
3. Deploy a Compute Engine instance to host the backend microservices.
4. Reconfigure the frontend code to utilize the backend microservices instance.
5. Deploy a Compute Engine instance to host the frontend microservice.
6. Configure the network to allow communication.
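Step 3 above might look like this in Cloud Shell (the machine type and tag follow this article; BUCKET_NAME and the instance name are placeholders):

```shell
# Create the backend instance, configured by the uploaded startup script
gcloud compute instances create backend \
    --machine-type=f1-micro \
    --tags=backend \
    --metadata=startup-script-url=gs://BUCKET_NAME/startup-script.sh
```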
To configure the network, you'll need to create firewall rules to allow access to the frontend and backend instances. You can do this by using the tags assigned during instance creation.
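A sketch of those firewall rules, assuming the frontend listens on port 8080 and the backends on ports 8081-8082 (the rule names are illustrative):

```shell
# Allow external traffic to the frontend microservice
gcloud compute firewall-rules create fw-frontend \
    --allow=tcp:8080 \
    --target-tags=frontend

# Allow traffic to the backend microservices (orders and products)
gcloud compute firewall-rules create fw-backend \
    --allow=tcp:8081-8082 \
    --target-tags=backend
```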
To update the configuration, you'll need to update the .env file to point to the new static IP address of the load balancer. You can do this by editing the .env file in the react-app folder.
Shell
In Cloud Shell, you can access a command line environment running in the Cloud. This Debian-based virtual machine is loaded with all the development tools you'll need.
To activate Cloud Shell from the Cloud Console, simply click Activate Cloud Shell. It should only take a few moments to provision and connect to the environment.
Once connected to Cloud Shell, you'll see that you are already authenticated and that the project is already set to your PROJECT_ID. If the project is not set, issue the following command to set it.
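Setting the project is a one-liner (replace PROJECT_ID with your own):

```shell
# Set the active project for all subsequent gcloud commands
gcloud config set project PROJECT_ID
```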
Looking for your PROJECT_ID? Check out what ID you used in the setup steps or look it up in the Cloud Console dashboard.
Cloud Shell also sets some environment variables by default, which may be useful as you run future commands. However, the default zone and project configuration still need to be set.
To set the default zone and project configuration, run the following command. Then, at the Cloud Shell prompt, run the initial build of the code so the app can run locally.
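For example (the zone here is only an example, and the build commands assume the codelab's Node.js project):

```shell
# Set a default zone for subsequent commands
gcloud config set compute/zone us-central1-f

# Initial build so the app can run locally
cd monolith-to-microservices
npm install
npm start
```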
Managed Group
Creating managed instance groups is a crucial step in setting up a scalable and available environment. You'll create two managed instance groups, one for the frontend and one for the backend, each with two instances.
To create a managed instance group, you'll first create instance templates from the existing source instances. A template defines the machine type, boot disk image or container image, network, and other instance properties to use when creating new virtual machine (VM) instances.
Managed instance groups maintain high availability of your apps by proactively keeping your instances available, that is, in the RUNNING state. This is a significant advantage over static configurations that don't adapt to changing loads.
A managed instance group contains identical instances that you can manage as a single entity in a single zone. This enables you to easily scale your application and manage your instances as a single unit.
To allow your application to scale, you'll create managed instance groups based on the frontend and backend instance templates. These groups provide autohealing, load balancing, autoscaling, and rolling updates for your application.
Named ports are key:value pair metadata representing the service name and the port that it's running on. You'll specify named ports to identify the frontend microservice running on port 8080 and the backend microservices running on ports 8081 for orders and port 8082 for products.
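Putting that together, creating a managed instance group from a template and attaching a named port might look like this (the group, template, and zone names are illustrative):

```shell
# Create a managed instance group of two frontend instances
gcloud compute instance-groups managed create frontend-mig \
    --zone=us-central1-f \
    --size=2 \
    --template=frontend-template

# Tag the service port so a load balancer can find it
gcloud compute instance-groups set-named-ports frontend-mig \
    --zone=us-central1-f \
    --named-ports=frontend:8080
```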
Clean Up
To avoid unexpected recurring charges, delete the project after completing the codelab. This will delete the load balancer, instances, templates, and more created during the process.
Deleting the project requires executing a command in Cloud Shell, where you need to enter your project ID, not just the project name.
Confirm deletion by entering "Y" when prompted.
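Deletion is a single, destructive command, so double-check the ID before running it:

```shell
# Deletes the project and everything in it (prompts for confirmation)
gcloud projects delete PROJECT_ID
```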
Regions and Zones
Regions and Zones are the basic building blocks of Google Cloud Platform, and understanding them is crucial for setting up and configuring your environment.
A Zone is an area where Google Cloud Platform resources like virtual machines or storage are deployed. You can launch a virtual machine in a zone you specify, such as europe-west2-a.
Zones are grouped into Regions, which are independent geographic areas and much larger than zones. For example, all zones in the europe-west2 region are grouped together.
Regions are independent geographic areas, and you can choose what regions you want your GCP resources to be placed in. This helps you bring your applications closer to users around the world.
Locations within regions usually have round-trip network latencies of under five milliseconds, making them ideal for fast data transfer. This is especially important for applications that require low latency.
Spreading resources across multiple zones in a region helps protect against unexpected failures. This is a key aspect of developing a fault-tolerant application.
You can run resources in different regions too, not just different zones. This helps guard against the loss of a whole region, say, due to a natural disaster.
Google Cloud Storage is one example of a service that supports deploying resources in a Multi-Region. This means that data is stored redundantly in a minimum of two different geographic locations, separated by at least 160 kilometers.
Deploying and Scaling
Deploying and scaling on Google Cloud Platform (GCP) is a breeze. You can deploy a backend instance, like the orders and products microservices, using an f1-micro instance configured with your startup script and tagged as a backend instance.
To scale your Compute Engine resources, you can create an autoscaling policy based on utilization, automatically adding instances when load balancer utilization exceeds 60% and removing them when it drops below 60%. This is done using Cloud Shell to create an autoscaler on the managed instance groups.
GCP provides robust virtual machines through its Compute Engine service, offering flexible configurations and seamless scaling with Kubernetes Engine.
Implementation Path
Deploying and scaling your project requires a solid implementation path.
To start, you'll need to install the Firebase CLI, which makes it easy to set up a new Hosting project, run a local development server, and deploy content.
Set up a project directory by adding your static assets to a local project directory, then run firebase init to connect the directory to a Firebase project.
You can also set up Cloud Functions or Cloud Run for your dynamic content and microservices in this project directory.
Viewing, testing, and sharing your changes before going live is optional, but you can do so by running firebase emulators:start to emulate Hosting and your backend project resources at a locally hosted URL.
To view and share your changes at a temporary preview URL, run firebase hosting:channel:deploy to create and deploy to a preview channel. You can also set up the GitHub integration for easy iterations of your previewed content.
Once things are looking good, deploy your site by running firebase deploy to upload the latest snapshot to our servers. If you need to undo the deploy, you can roll back with just one click in the Firebase console.
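End to end, the flow uses the Firebase CLI commands named above:

```shell
# One-time install of the Firebase CLI
npm install -g firebase-tools

# Connect the project directory to a Firebase project
firebase init

# Preview locally with the emulators
firebase emulators:start

# Share work-in-progress at a temporary preview URL
firebase hosting:channel:deploy preview

# Go live
firebase deploy
```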
Linking your site to a Firebase Web App is optional, but it allows you to use Google Analytics to collect usage and behavior data for your app and use Firebase Performance Monitoring to gain insight into the performance characteristics of your app.
Compute Engine
Compute Engine is a powerful tool for deploying and scaling your applications. It allows you to provision virtual machines with the CPU, memory, and disk your workload requires, protected by firewall rules.
You can deploy Compute Engine instances in various configurations, such as f1-micro instances, which are suitable for small applications. To create an f1-micro instance, you'll need to execute a command in Cloud Shell, specifying the startup script and tagging it as a backend instance.
Compute Engine also offers flexible scaling options, including autoscaling policies based on utilization. This means you can automatically add or remove instances as needed to maintain optimal performance.
GCP's Compute Engine service provides robust virtual machines that can be tailored to suit different types of workloads. You can create managed instance groups, each with multiple instances, and configure them to scale automatically based on load.
To create an autoscaling policy, you'll need to execute a series of commands in Cloud Shell, which will create an autoscaler on the managed instance groups. This will automatically add instances when load balancer utilization is above 60% and remove them when it falls below 60%.
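As a sketch (the instance group name and zone are illustrative), the autoscaler can be created like this:

```shell
# Scale the group between its current size and 2 replicas,
# targeting 60% load-balancing utilization
gcloud compute instance-groups managed set-autoscaling frontend-mig \
    --zone=us-central1-f \
    --max-num-replicas=2 \
    --target-load-balancing-utilization=0.60
```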
Here are some key features of Compute Engine:
- Provision virtual machines with the required CPU, memory, disk, and firewall rules
- Create managed instance groups with multiple instances
- Configure autoscaling policies based on utilization
- Deploy virtual machines in various configurations, such as f1-micro instances
By leveraging Compute Engine's powerful features, you can deploy and scale your applications with ease, ensuring optimal performance and cost-effectiveness.
Simulate Failure
To simulate failure, log into an instance and stop the services. To find an instance name, execute the following command.
You can then SSH into one of the instances, where INSTANCE_NAME is one of the instances from the list.
In the instance, use supervisorctl to stop the app. This will help you confirm that the health check works as expected.
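In concrete terms (the supervisor program name here is an assumption about how the app is registered):

```shell
# Find an instance name
gcloud compute instances list

# SSH into one of them (INSTANCE_NAME taken from the list above)
gcloud compute ssh INSTANCE_NAME --zone=us-central1-f

# On the instance, stop the app so the health check starts failing
sudo supervisorctl stop nodeapp
```

Once the managed instance group detects the failed health check, autohealing should recreate the instance.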