Azure offers a range of reference architectures to help you design and implement scalable and secure solutions. These architectures are based on proven patterns and best practices, and are designed to be adaptable to your specific needs.
A key aspect of Azure reference architecture is the use of microservices: small, independent services that communicate with each other to achieve a common goal. This approach allows for greater flexibility and scalability.
By using Azure reference architecture, you can create solutions that are more resilient and easier to maintain, with features like automated deployment and scaling. This can help reduce costs and improve overall efficiency.
Azure provides a range of tools and resources to support reference architecture design and implementation, including the Azure Architecture Center, which offers in-depth guidance on designing and implementing scalable and secure solutions.
Architecture
The Azure App Service architecture is a vital component of this reference implementation: an App Service web app connects directly to an Azure SQL Database.
Application Insights and Azure Monitor are also integral parts of this architecture, providing valuable telemetry and monitoring capabilities.
To ensure a solid foundation for your Azure implementation, it's essential to familiarize yourself with the application components and architecture. This includes reading through the pre-install checklist and the reliability and availability guidance.
Here are some general requirements and recommendations related to networking when running Tanzu Operations Manager on Azure:
- You must enable virtual network peering between the hub and spoke resource groups (see the sketch after this list).
- You must have separate subscriptions for each resource group.
- Use ExpressRoute for a dedicated connection from an on-premises datacenter to the VDC.
- Use a central firewall, such as a network virtual appliance, in the hub resource group.
- You can place network resources, such as DNS and NTP servers, either on-premises or in Azure.
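As an illustration of the peering requirement above, here's a minimal Python sketch that creates one side of a hub-to-spoke peering by calling the Azure CLI; all subscription, resource group, and virtual network names are placeholders rather than values from the reference architecture.

```python
import subprocess

# Placeholder names: a hub VNet "hub-vnet" in resource group "hub-rg", and a spoke
# VNet identified by its full resource ID (required when the spoke lives in a
# different subscription, as in the hub-and-spoke layout described above).
SPOKE_VNET_ID = (
    "/subscriptions/<spoke-subscription-id>/resourceGroups/spoke-rg"
    "/providers/Microsoft.Network/virtualNetworks/spoke-vnet"
)

subprocess.run(
    [
        "az", "network", "vnet", "peering", "create",
        "--name", "hub-to-spoke",
        "--resource-group", "hub-rg",
        "--vnet-name", "hub-vnet",
        "--remote-vnet", SPOKE_VNET_ID,
        "--allow-vnet-access",
    ],
    check=True,
)
# A matching peering must also be created in the opposite direction (spoke-to-hub)
# for traffic to flow both ways.
```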
Workflow
In the architecture of Azure App Service, the workflow is streamlined to ensure seamless interaction between the user, the app, and the underlying infrastructure. The user issues an HTTPS request to the app's default domain on azurewebsites.net, which automatically points to the App Service's built-in public IP.
The TLS connection is established directly between the client and App Service, with the certificate managed completely by Azure. Easy Auth, a feature of Azure App Service, handles authenticating the user with Microsoft Entra ID, ensuring a secure and trusted experience.
Your application code deployed to App Service handles the request, connecting to an Azure SQL Database instance using a connection string configured in the App Service as an app setting. This is where your custom logic and business rules come into play, processing the request and generating a response.
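To make this concrete, here's a minimal sketch of application code reading that connection string and querying the database. It assumes the connection string lives in an app setting named SQL_CONNECTION_STRING (App Service surfaces app settings as environment variables) and that the pyodbc driver is available; the setting name and table name are purely illustrative.

```python
import os

import pyodbc  # requires the Microsoft ODBC Driver for SQL Server on the host image

# App Service app settings are exposed to the application as environment variables.
# "SQL_CONNECTION_STRING" is an assumed setting name for this sketch.
conn_str = os.environ["SQL_CONNECTION_STRING"]


def fetch_rows():
    """Open a connection, run an illustrative query, and return the rows."""
    conn = pyodbc.connect(conn_str)
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT TOP 10 * FROM dbo.Orders")  # table name is illustrative
        return cursor.fetchall()
    finally:
        conn.close()
```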
The entire workflow is logged in Application Insights, providing valuable insights into the original request to App Service and the subsequent call to Azure SQL Database. This logging capability helps you monitor and troubleshoot your app's performance, identifying areas for improvement and optimization.
Component Interaction
The Load Balancer plays a crucial role in routing all traffic to the active Terraform Enterprise instance. This instance handles all requests to the Terraform Enterprise application.
The Terraform Enterprise application connects to the PostgreSQL database via an Azure provided database server name endpoint. This ensures that all database requests are routed to the highly available infrastructure supporting Azure Database for PostgreSQL.
The Load Balancer is responsible for directing traffic to the active instance, ensuring that requests always reach a healthy instance of the application.
The Terraform Enterprise application also connects to object storage via the Azure Blob Storage endpoint for the defined container. This allows for secure and reliable storage of objects.
The highly available infrastructure supporting Azure Storage ensures that all object storage requests are routed to a reliable and scalable storage solution.
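Terraform Enterprise makes these object storage calls internally, but as a rough illustration of what the Blob Storage endpoint for a defined container looks like, here's a short Python sketch using the Azure SDK; the storage account and container names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# "tfeobjects" and "tfe-data" are placeholder account and container names; Terraform
# Enterprise is configured with its own object storage settings and performs these
# operations itself -- this sketch only shows the shape of the Blob Storage endpoint.
account_url = "https://tfeobjects.blob.core.windows.net"
service = BlobServiceClient(account_url=account_url, credential=DefaultAzureCredential())
container = service.get_container_client("tfe-data")

# List the objects stored in the container.
for blob in container.list_blobs():
    print(blob.name)
```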
VMs in a Single Resource Group
If shared network resources don't exist in an Azure subscription, you can use a single resource group to deploy Tanzu Operations Manager and define network constructs.
Using a single resource group can simplify the deployment process and make it easier to manage network resources.
This approach is particularly useful when you're just starting out with Tanzu Operations Manager and want to keep things straightforward.
Architecture
When designing the architecture for HashiCorp Terraform Enterprise on Azure, it's essential to consider the minimum requirements for Terraform Enterprise servers: at least 4 CPU cores, 16 GB of RAM, and 50 GB of disk space, which the Standard_D4_v4 Azure VM size satisfies.
The CPU requirement can be scaled up to 8 cores for more demanding implementations. This is a significant increase from the minimum, allowing for more concurrent tasks and improved performance.
The recommended Azure VM sizes for Terraform Enterprise Servers are the Standard_D4_v4 and Standard_D8_v4. These sizes provide a balance between cost and performance, making them suitable for most use cases.
Here's a summary of the recommended Azure VM sizes:
- Standard_D4_v4: 4 vCPUs, 16 GB RAM (minimum)
- Standard_D8_v4: 8 vCPUs, 32 GB RAM (scaled up)
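If you want to confirm that these sizes are available in your target region before deploying, a short sketch like the following can list them with the Azure SDK for Python; the subscription ID and region are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID and region; adjust for your environment.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

recommended = {"Standard_D4_v4", "Standard_D8_v4"}
for size in client.virtual_machine_sizes.list(location="eastus"):
    if size.name in recommended:
        print(size.name, size.number_of_cores, "cores,", size.memory_in_mb // 1024, "GB RAM")
```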
Required Reading
Before diving into architecture, it's essential to read through the pre-install checklist to familiarize yourself with the application components and architecture.
This checklist will give you a solid foundation to make informed decisions about hardware sizing and architectural decisions.
Reading the reliability and availability guidance is also crucial as a primer to understanding the recommendations in this reference architecture.
Cluster Architecture
Cluster architecture is a crucial aspect of deploying a compute cluster on Azure. The cluster is created by a template that automatically sets up the necessary resources, including the MATLAB Job Scheduler.
The template uses Azure Resource Manager to create the resources, and for more information about each resource, you can check the Azure template reference. This reference provides detailed information on each component, ensuring you understand how they fit together.
A compute cluster typically consists of two main components: the head node and the worker nodes. The head node is a compute VM that serves as the central hub for the cluster, while the worker nodes are a scale set of VMs that perform the actual computations.
Here are the key components of a compute cluster:
- Head node VM (Microsoft.Compute/virtualMachines): A compute VM for the cluster head node, which includes the MATLAB install and stores the job database locally or on a separate data disk.
- Worker scaling set (Microsoft.Compute/virtualMachineScaleSets): A scale set for worker VMs to be deployed into, which communicate with the clients and head node using a secure SSL connection.
This architecture is designed to provide a scalable and secure way to deploy a compute cluster on Azure. By using a template and Azure Resource Manager, you can easily set up and manage your cluster, ensuring it meets your needs and provides the best possible performance.
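After the template deployment completes, you can inspect the two components described above, the head node VM and the worker scale set, with the Azure SDK for Python, as in this rough sketch; the resource group and resource names are placeholders, since the actual names depend on your deployment.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder names -- the actual resource group, head node VM, and worker scale set
# names depend on what the template created in your subscription.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Head node: a Microsoft.Compute/virtualMachines resource.
head_node = client.virtual_machines.get("matlab-cluster-rg", "headnode")
print("Head node size:", head_node.hardware_profile.vm_size)

# Workers: a Microsoft.Compute/virtualMachineScaleSets resource.
workers = client.virtual_machine_scale_sets.get("matlab-cluster-rg", "workers")
print("Worker scale set capacity:", workers.sku.capacity)
```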
Reliability
Reliability is a top priority when it comes to ensuring your application can meet its commitments to customers.
The architecture outlined in this example isn't designed for production deployments, which means some critical reliability features are omitted.
The App Service Plan is configured for the Standard tier, which doesn't have Azure availability zone support. This means the App Service becomes unavailable in the event of any issue with the instance, the rack, or the datacenter hosting the instance.
The Azure SQL Database is configured for the Basic tier, which doesn't support zone-redundancy. This means data isn't replicated across Azure availability zones, risking loss of committed data in the event of an outage.
Deployments to this architecture might result in downtime, as most deployment techniques require all running instances to be restarted. Users may experience 503 errors during this process.
To prevent reliability issues due to lack of available compute resources, you'd need to overprovision to always run with enough compute to handle max concurrent capacity. Autoscaling is not enabled in this basic architecture.
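For reference, here's a hedged sketch of what enabling rule-based autoscale on the App Service Plan could look like in a later phase, using the Azure CLI from Python; the resource group, plan, and autoscale setting names are placeholders, and this isn't part of the basic architecture itself.

```python
import subprocess

# Placeholder resource group and App Service Plan; this illustrates a later phase,
# not the basic architecture described in this section.
RG = "basic-web-app-rg"
PLAN_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/basic-web-app-rg"
    "/providers/Microsoft.Web/serverfarms/basic-plan"
)

# Create an autoscale setting targeting the App Service Plan.
subprocess.run(
    ["az", "monitor", "autoscale", "create",
     "--resource-group", RG, "--resource", PLAN_ID,
     "--name", "basic-plan-autoscale",
     "--min-count", "1", "--max-count", "3", "--count", "1"],
    check=True,
)

# Scale out by one instance when average CPU exceeds 70% over 5 minutes.
subprocess.run(
    ["az", "monitor", "autoscale", "rule", "create",
     "--resource-group", RG, "--autoscale-name", "basic-plan-autoscale",
     "--condition", "CpuPercentage > 70 avg 5m",
     "--scale", "out", "1"],
    check=True,
)
```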
Here are some reliability concerns to be aware of:
- The App Service Plan is configured for the Standard tier.
- The Azure SQL Database is configured for the Basic tier.
- Deployments to this architecture might result in downtime.
- Autoscaling is not enabled.
If you're looking to overcome these reliability concerns, see the reliability section in the Baseline highly available zone-redundant web application.
Security
In a production deployment, it's essential to implement network privacy to reduce the attack surface of your architecture. This can be achieved by using private networking, which is not included in this basic architecture.
Implementing private networking ensures several security features, including network isolation and segmentation. However, this is not a concern for a proof of concept phase.
The basic architecture also doesn't include a deployment of the Azure Web Application Firewall, leaving the web application unprotected against common exploits and vulnerabilities.
Consider using Azure Key Vault to store secrets such as the Azure SQL Server connection string, especially when moving to production. This provides increased governance and security.
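As a sketch of what that could look like, the following Python snippet reads a secret from Key Vault using the Azure SDK; the vault URL and secret name are placeholders, and the App Service's managed identity would need to be granted access to the vault.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault and secret names; when running in App Service,
# DefaultAzureCredential picks up the app's managed identity automatically.
client = SecretClient(
    vault_url="https://my-app-kv.vault.azure.net",
    credential=DefaultAzureCredential(),
)
sql_connection_string = client.get_secret("sql-connection-string").value
```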
While remote debugging and Kudu endpoints are fine to leave enabled during the development or proof of concept phase, it's essential to disable them when moving to production to avoid exposing unnecessary control plane, deployment, or remote access endpoints.
Here are some key security considerations to keep in mind:
- Implement private networking for network isolation and segmentation
- Use Azure Key Vault to store secrets
- Disable remote debugging and Kudu endpoints in production
- Use managed identity for authentication rather than storing secrets in the connection string (see the sketch after this list)
- Enable Microsoft Defender for App Service in production
- Use the integrated authentication mechanism for App Service ("EasyAuth")
- Use managed identity for workload identities
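As a sketch of the managed identity recommendation above, here's one common pattern for connecting to Azure SQL without a password in the connection string: acquire a token with azure-identity and hand it to pyodbc. The server and database names are placeholders, and the token packing follows the pattern expected by the Microsoft ODBC driver.

```python
import struct

import pyodbc
from azure.identity import DefaultAzureCredential

# Acquire a token for Azure SQL; in App Service, DefaultAzureCredential resolves to
# the app's managed identity.
token = DefaultAzureCredential().get_token("https://database.windows.net/.default").token

# Pack the token as the ODBC driver expects it (length-prefixed UTF-16-LE bytes).
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

SQL_COPT_SS_ACCESS_TOKEN = 1256  # connection attribute defined by the MS ODBC driver

# Placeholder server and database names; note there is no user name or password.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:my-sql-server.database.windows.net,1433;Database=my-db;",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)
```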
Diagnostics and Monitoring
In the proof of concept phase, it's essential to get a clear understanding of what logs and metrics are available to be captured.
To achieve this, enable diagnostics logging for all log sources, as this helps you understand what logs and metrics are provided out of the box and identify any gaps you'll need to close using a logging framework in your application code.
You should configure logging to use Azure Log Analytics, which provides a scalable platform to centralize logging that's easy to query.
Use Application Insights or another Application Performance Management (APM) tool to emit telemetry and logs to monitor application performance.
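One way to do this from Python code is the Azure Monitor OpenTelemetry distro, sketched below; it assumes the Application Insights connection string is available to the app (by default the distro reads it from the APPLICATIONINSIGHTS_CONNECTION_STRING setting).

```python
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

# Configure the distro once at startup; it exports logs, traces, and metrics to
# Application Insights using the connection string from the app settings.
configure_azure_monitor()

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.info("Processed order %s", "12345")  # illustrative application log line
```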
Here are some key considerations for monitoring in the proof of concept phase:
- Enable diagnostics logging for all log sources.
- Configure logging to use Azure Log Analytics.
- Use Application Insights or another Application Performance Management (APM) tool.
In addition to these considerations, note that when you move to production, you should eliminate log sources that aren't adding value and are only adding noise and cost to your workload's log sink.
Deployment
Deployment is a crucial step in bringing your Azure App Service application to life. Automate the deployment process using Azure Pipelines to quickly and safely iterate on your application as you move toward production.
Start building your deployment logic in the Proof of Concept (PoC) phase to catch any issues early on. Implementing Continuous Integration and Continuous Deployment (CI/CD) early in the development process allows you to automate the deployment of your application.
Use ARM templates to deploy Azure resources and their dependencies. This is especially important to start in the PoC phase, so you can automatically deploy your infrastructure as you move toward production.
Different ARM templates can be used to create different environments, such as replicating production-like scenarios or load testing environments only when needed. This setup also helps save on cost.
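As a minimal sketch, an ARM template deployment can be driven from a script or pipeline step like the following; the resource group, template file, and parameter file names are placeholders.

```python
import subprocess

# Deploy an ARM template to a resource group with the Azure CLI. In a pipeline this
# would typically run from an Azure Pipelines task rather than a local script.
subprocess.run(
    ["az", "deployment", "group", "create",
     "--resource-group", "basic-web-app-rg",
     "--template-file", "main.json",
     "--parameters", "@environments/dev.parameters.json"],
    check=True,
)
```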
Here are some key considerations for deployment:
- Follow the guidance in CI/CD for Azure Web Apps with Azure Pipelines.
- Use ARM templates to deploy Azure resources and their dependencies.
- Use different ARM templates to create different environments.
Containers
Azure App Service allows you to deploy supported code directly to Windows or Linux instances.
You can also use App Service as a container hosting platform to run your containerized web application.
App Service offers various built-in containers to support your web applications.
If you're using custom or multi-container apps, you may need to introduce a container registry to further fine-tune your runtime environment or to support a language that App Service doesn't natively support.
Deploying to a container registry gives you more flexibility and control over your application's environment.
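As a rough sketch, pointing an App Service at a custom image from a registry can look like the following; the app, registry, and image names are placeholders, and newer Azure CLI versions expose renamed aliases for some of these flags.

```python
import subprocess

# Configure the web app to run a custom container image from a registry.
# All names are placeholders; credentials would normally come from the registry's
# managed identity integration rather than being passed on the command line.
subprocess.run(
    ["az", "webapp", "config", "container", "set",
     "--name", "my-web-app",
     "--resource-group", "basic-web-app-rg",
     "--docker-custom-image-name", "myregistry.azurecr.io/my-web-app:1.0.0",
     "--docker-registry-server-url", "https://myregistry.azurecr.io"],
    check=True,
)
```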
Control Plane
As you dive into Azure App Service, it's essential to get familiar with the control plane. This is where you'll find the Kudu service, which exposes common deployment APIs like ZIP deployments.
The Kudu service also provides access to raw logs and environment variables, critical tools for troubleshooting and fine-tuning your app.
If you're using containers, don't overlook Kudu's ability to open an SSH session to a container. This feature is a must-have for advanced debugging capabilities.
With the control plane, you'll have a solid foundation for deploying and managing your app on Azure App Service.
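As an illustration of those deployment APIs, here's a hedged sketch that posts a ZIP package to Kudu's zipdeploy endpoint; the app name and credentials are placeholders, and in practice a pipeline or the Azure CLI would usually wrap this call.

```python
import requests

# Push a ZIP package to the Kudu (SCM) site for the app. The user name for the
# publishing credentials is "$<app-name>" and the password comes from the app's
# publish profile; both are placeholders here.
app = "my-web-app"
with open("app.zip", "rb") as package:
    response = requests.post(
        f"https://{app}.scm.azurewebsites.net/api/zipdeploy",
        data=package,
        auth=("$" + app, "<publishing-password>"),
        timeout=300,
    )
response.raise_for_status()
```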
Sources
- https://learn.microsoft.com/en-us/azure/architecture/web-apps/app-service/architectures/basic-web-app
- https://www.networkbachelor.com/azure-iaas-reference-architecture-a-quick-overview/
- https://developer.hashicorp.com/terraform/enterprise/deploy/replicated/architecture/reference-architecture/azure
- https://docs.vmware.com/en/VMware-Tanzu-Operations-Manager/3.0/vmware-tanzu-ops-manager/refarch-azure-azure_ref_arch.html
- https://github.com/mathworks-ref-arch/matlab-parallel-server-on-azure