Increasing Resiliency With Load Balancers

More and more organizations are building applications on Cloud Run, a fully managed compute platform that lets you run containerized applications on top of Google’s infrastructure. Think web applications, real-time dashboards, APIs, microservices, batch data processing, testing and monitoring tools, data science inference models, and more. Today, we’re excited to announce that it’s easier than ever to build internal apps on Cloud Run.  In this post, we’ll introduce three common design patterns and what’s new in Cloud Run to help implement these patterns.

  1. Internal Web Apps – enabled by the GA launch of Identity-Aware Proxy
  2. Internal APIs – enabled by the GA launch of the regional internal HTTP(S) load balancer
  3. Microservices Spanning a Shared VPC – enabled by the Public Preview launch of Shared VPC ingress

1. Internal Web Apps

https://storage.googleapis.com/gweb-cloudblog-publish/images/1_InternalLoadBalancer_FqcvBmD.max-800x800.jpg

A common use case for many customers is building internal web applications that are accessible only to employees. Previously, you had to rely on a VPN and custom authentication flows, which wasn't ideal. Now, you can simplify your login experience and get centralized access control with Identity-Aware Proxy (IAP) support for Cloud Run, which is generally available.

Identity-Aware Proxy helps you move towards Zero Trust principles by providing secure access to applications running on Google Cloud or other cloud platforms, using OAuth 2.0 and OpenID Connect standards to authenticate and authorize users. If you have a web application running on Cloud Run, you can now use IAP to authorize access to your app based on the client’s user identity and context. This architecture simplifies the login experience and gives security administrators centralized access control over the company’s internal web apps.  Security-sensitive organizations also have the option to upgrade to BeyondCorp Enterprise with IAP to enable full context-aware access, including user and device context, to Cloud Run applications.

What’s unique about this integration is that the IAP service itself can now authenticate to Cloud Run’s built-in IAM system. Specifically, when IAP forwards a request to Cloud Run, it includes an OpenID Connect ID token for its own service account in the X-Serverless-Authorization header. On the Cloud Run side, you no longer have to grant “allUsers” the Cloud Run Invoker role, which was a blocker for customers using the Domain Restricted Sharing organization policy. Now that the integration is generally available, you only need to grant IAP’s service account the Cloud Run Invoker role.
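The IAM binding above can be sketched with the gcloud CLI. The project number, service name, and region below are illustrative placeholders; the service agent address follows IAP's standard `service-PROJECT_NUMBER@gcp-sa-iap.iam.gserviceaccount.com` format.

```shell
# Grant the IAP service agent permission to invoke the Cloud Run
# service, instead of opening the service to allUsers.
# Replace the placeholder values with your own.
PROJECT_NUMBER="123456789012"
SERVICE="my-internal-app"
REGION="us-central1"

gcloud run services add-iam-policy-binding "${SERVICE}" \
  --region="${REGION}" \
  --member="serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-iap.iam.gserviceaccount.com" \
  --role="roles/run.invoker"
```

With this binding in place, only requests carrying IAP's signed ID token are authorized, so the service stays closed even under a Domain Restricted Sharing policy.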

2. Internal APIs

https://storage.googleapis.com/gweb-cloudblog-publish/images/2_IAPDiagram_OFY9tFL.max-1100x1100.jpg

Cloud Run is also a great choice for building APIs because it’s easy, secure, and cost-effective.  For public-facing APIs, you could use the external HTTP(S) load balancer, which supports custom domains, advanced traffic management, and many security features. For internal-facing APIs you can use the regional internal HTTP(S) load balancer, which is also now generally available. Regional internal HTTP(S) load balancers bring the best features of the external HTTP(S) load balancer to internal workloads, including those that span multiple projects.
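As a rough sketch of the wiring, a Cloud Run service is placed behind a regional internal HTTP(S) load balancer via a serverless network endpoint group (NEG). The resource names and region below are illustrative placeholders.

```shell
# Create a serverless NEG pointing at an existing Cloud Run service,
# then attach it to a backend service with the internal managed
# load-balancing scheme.
REGION="us-central1"

gcloud compute network-endpoint-groups create my-api-neg \
  --region="${REGION}" \
  --network-endpoint-type=serverless \
  --cloud-run-service=my-internal-api

gcloud compute backend-services create my-api-backend \
  --load-balancing-scheme=INTERNAL_MANAGED \
  --region="${REGION}"

gcloud compute backend-services add-backend my-api-backend \
  --region="${REGION}" \
  --network-endpoint-group=my-api-neg \
  --network-endpoint-group-region="${REGION}"
```

A complete setup also needs a proxy-only subnet in the VPC, plus a URL map, target HTTP(S) proxy, and forwarding rule; the steps above only cover the Cloud Run-specific part.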

Unlike external load balancers, an internal HTTP(S) load balancer can only be accessed from other resources on the VPC. This architecture keeps all traffic within the VPC, and clients no longer have to call Cloud Run services via their public URLs. To further lock down access, set the service's ingress to internal, so traffic from the public internet cannot reach your Cloud Run service.
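Restricting ingress is a one-line change on the service; the service name and region here are placeholders.

```shell
# Restrict the service so it is only reachable from inside the VPC
# (for example, via the internal HTTP(S) load balancer); requests
# arriving from the public internet are rejected.
gcloud run services update my-internal-api \
  --region=us-central1 \
  --ingress=internal
```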

3. Microservices Spanning a Shared VPC

https://storage.googleapis.com/gweb-cloudblog-publish/images/3_SharedVPCIngressDiagram_wQfMlee.max-1100x1100.jpg

If you have a microservices model spanning a Shared VPC, you may be looking for an architecture that allows easy direct service-to-service calls while ensuring all traffic stays within your private network.

We’re happy to announce that Shared VPC ingress for Cloud Run is now in Public Preview. This launch streamlines the setup needed to use Cloud Run with a Shared VPC. Previously, if you restricted Cloud Run ingress to “internal,” traffic from service projects in a Shared VPC would be blocked. With this launch, a Cloud Run service will now accept requests from the Shared VPC network it is connected to when ingress is configured as “internal” or “internal and cloud load balancing.”
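As a hedged sketch of this pattern: deploy the service attached to the Shared VPC (here via a Serverless VPC Access connector in the host project) with internal-only ingress, then call it from a client in a service project. The project, connector, service names, and URL below are illustrative placeholders.

```shell
# Deploy a Cloud Run service connected to the Shared VPC through a
# connector in the host project, accepting internal traffic only.
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/cloudrun/container/hello \
  --region=us-central1 \
  --vpc-connector=projects/HOST_PROJECT/locations/us-central1/connectors/my-connector \
  --ingress=internal

# From a VM on the Shared VPC (e.g. in a service project), call the
# service directly with an ID token; the request never leaves the
# private network.
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  https://my-service-abc123-uc.a.run.app
```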

Cloud Run and Shared VPC can help your organization build and deploy applications more quickly and easily. You’ll benefit from centralized network administration, improved security and increased scalability.

Next steps

We know that security is a top concern. That’s why we’ve published a security guide to help you configure the security of your applications on Cloud Run. The guide covers Cloud Run’s internal architecture, how your data is handled and features you can leverage to meet your security requirements. 

And if you’re looking for an easy way to integrate these architectures into your system via infrastructure as code, we’ve published a set of evolving Terraform blueprints.

By Rachel Tsao, Product Manager, Serverless | Xiaowen Xin, Product Manager
Source: Google Cloud

Source: Cyberpogo


