Google Cloud | Networking

How To Use Packet Mirroring For IDS In Different VPC Designs

When migrating from on-premises to the cloud, many Google Cloud customers want scalable solutions to detect and alert on higher-layer network anomalies, keeping the same level of network visibility they have on-prem. The answer may be to combine Packet Mirroring with an Intrusion Detection System (IDS) such as the open-source Suricata, or some other preferred threat detection system. This type of solution can provide the visibility you need in the cloud to detect malicious activity, alert, and perhaps even implement security measures to help prevent subsequent intrusions. 

However, design strategies for Packet Mirroring plus IDS can be confusing, considering the number of available VPC design options. For instance, there’s Google’s global VPC, Shared VPC, and VPC Peering. In this blog, we’ll show you how to use Packet Mirroring and virtual IDS instances in a variety of VPC designs, so you can inspect network traffic while keeping the ability to use the supported VPC options that Google Cloud provides.

Packet Mirroring basics

But first, let’s talk some more about Packet Mirroring, one of the key tools for security and network analysis in a Google Cloud networking environment. Packet Mirroring is functionally similar to a network tap or a SPAN session in traditional networking: Packet Mirroring captures network traffic (ingress and egress) from select “mirrored sources,” copies the traffic, and forwards the copy to “collectors.” Packet Mirroring captures the full payload of each packet, not just the headers. Also, because Packet Mirroring is not based on any sampling period, you can use it for in-depth packet-level troubleshooting, security solutions, and application-layer network analysis.

Packet Mirroring relies on a “Packet Mirroring policy” with five attributes:

  1. Region
  2. VPC network(s)
  3. Mirrored source(s)
  4. Collector (destination)
  5. Mirrored traffic (filter)

Here’s a sample Packet Mirroring policy:
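
The gcloud command below sketches roughly what such a policy can look like when all five attributes are specified. The policy name, network, subnet, forwarding-rule name, region, and filter values are placeholder values, not the ones shown in the original screenshot.

  # Mirror the subnet "mirrored-subnet" in us-central1 and deliver copies to the
  # collector ILB forwarding rule "collector-fr"; only TCP, UDP, and ICMP from any
  # source or destination range are mirrored.
  gcloud compute packet-mirrorings create demo-policy \
      --region=us-central1 \
      --network=my-vpc \
      --mirrored-subnets=mirrored-subnet \
      --collector-ilb=collector-fr \
      --filter-protocols=tcp,udp,icmp \
      --filter-cidr-ranges=0.0.0.0/0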

When creating a Packet Mirroring policy, consider these key points:

  • Mirrored sources and collectors must be in the same region, but can be in different zones—or even different VPCs or projects.
  • Collectors must be placed behind an Internal Load Balancer (ILB).
  • Mirrored traffic consumes additional bandwidth on the mirrored sources. Size your instances accordingly.
  • The collectors see network traffic at Layer 3 and above the same way that the mirrored VMs see the traffic. This includes any NATing and/or SSL decryption that may occur at a higher layer within Google Cloud.

There are two user roles that are especially relevant for creating and managing Packet Mirroring:

  • “compute.packetMirroringUser” – This role grants users the rights to create, update, and delete Packet Mirroring policies. It is required in the project where the Packet Mirroring policy will live.
  • “compute.packetMirroringAdmin” – This role grants users the right to mirror particular resources, and it is required in the project(s) that contain the mirrored sources. (Minimal example bindings follow this list.)
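
As a minimal, hedged example, the bindings could be granted with gcloud as follows; the project IDs and user below are placeholders for your own values.

  # Let an operator manage Packet Mirroring policies in the project that will own them.
  gcloud projects add-iam-policy-binding ids-project \
      --member="user:netops@example.com" \
      --role="roles/compute.packetMirroringUser"

  # Let that operator mirror the sources that live in the workload project.
  gcloud projects add-iam-policy-binding workload-project \
      --member="user:netops@example.com" \
      --role="roles/compute.packetMirroringAdmin"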

Using Packet Mirroring to power IDS

An IDS needs to see traffic to be able to inspect it. You can use Packet Mirroring to feed traffic to a group of IDSs; this approach has some significant benefits over other methods of steering traffic to an IDS instance. For example, some cloud-based IDS solutions require special software (i.e., an agent) to run on each source VM, and that agent duplicates and forwards traffic to the IDS. With Packet Mirroring, you don’t need to deploy any agents on the VMs, and traffic is mirrored to the IDS in a cloud-native way. And while an agent-based solution is fully distributed and prevents network bottlenecks, it requires that the guest operating system support the software. Furthermore, with an agent-based solution, CPU utilization and network traffic on the VM will almost certainly increase because the guest VM and its resources are tasked with duplicating traffic. High CPU utilization related to network throughput is a leading contributor to poor VM performance.

Another common approach is to place a virtual appliance “in-line” between the network source and destination. The benefit of this design is that the security appliance can act as an Intrusion Prevention System (IPS) and actually block or deny malicious traffic between networks. However, an in-line solution, where traffic is routed through security appliances, doesn’t capture east-west traffic between VMs in the same VPC. Because subnet routes take precedence in a VPC, in-line solutions that are fed traffic via static routes can’t alert on intra-VPC traffic. Thus, a large portion of network traffic is left unanalyzed; a traditional in-line IDS/IPS solution only inspects traffic at a VPC or network boundary.

Packet Mirroring solves both these problems. It doesn’t require any additional software on the VMs, it’s fully distributed across each mirrored VM, and traffic duplication happens transparently at the SDN layer. The Collector IDS is placed out-of-path behind a load balancer and receives both north-south traffic and east-west traffic.

Using Packet Mirroring in various VPC configurations

Packet Mirroring works across a number of VPC designs, including:

  • Single VPC with a single region
  • Single VPC with multiple regions
  • Shared VPC
  • Peered VPC

Here are a few recommendations that apply to each of these scenarios:

  • Use a unique subnet for the mirrored instances and collectors. This means if the mirrored sources and the collectors are in the same VPC, create multiple subnets in each region. Place the resources that need to be mirrored in one subnet and place the collectors in the other. There is no default recommended size for the collector subnet, but make sure to allocate enough space for all the collectors that might be in that region plus a little more. Remember, you can always add additional subnets to a region in Google Cloud.
  • Don’t assign public IPs to virtual IDS instances. Instead, use Cloud NAT to provide egress internet access, as in the sketch after this list. Leaving the instances without a public IP helps keep them from being exposed to inbound traffic from the internet.
  • If possible, use redundant collectors (IDS instances) behind the ILB for high availability.
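
The sketch below shows one way to implement the first two recommendations with gcloud: a dedicated collector subnet, plus a Cloud Router and Cloud NAT gateway so the IDS instances can reach the internet without public IPs. All names, the region, and the IP range are placeholder values.

  # Dedicated subnet for the collectors, separate from the mirrored workloads.
  gcloud compute networks subnets create collector-subnet \
      --network=my-vpc --region=us-central1 --range=10.10.20.0/24

  # Cloud Router and Cloud NAT give instances without external IPs egress
  # internet access (for OS updates, IDS rule feeds, and so on).
  gcloud compute routers create nat-router \
      --network=my-vpc --region=us-central1
  gcloud compute routers nats create ids-nat \
      --router=nat-router --region=us-central1 \
      --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges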

Now, let’s take a look at these designs one by one. 

Single VPC with a single region
This is the simplest of all the supported designs. In this design, all mirrored sources exist in one region in a standard VPC. This is most suitable for small test environments or VPCs where network management is not dedicated to a networking team. Note that the mirrored sources, the Packet Mirroring policy, the collector ILB, and the IDS instances are all contained within the same region and the same VPC. Lastly, Cloud NAT is configured to allow the IDS instances internet access. Everything lives in a single region, a single VPC, and a single project.
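
As a rough gcloud sketch of the collector side of this design, assuming the IDS VMs already exist in an unmanaged instance group named ids-ig (every name, the zone, the region, and the health-check port are placeholder assumptions):

  # Health check and regional internal backend service for the IDS instances.
  gcloud compute health-checks create tcp ids-hc --port=22
  gcloud compute backend-services create ids-backend \
      --load-balancing-scheme=INTERNAL --protocol=TCP \
      --health-checks=ids-hc --region=us-central1
  gcloud compute backend-services add-backend ids-backend \
      --instance-group=ids-ig --instance-group-zone=us-central1-a \
      --region=us-central1

  # The ILB forwarding rule is flagged as a mirroring collector and placed in
  # the dedicated collector subnet.
  gcloud compute forwarding-rules create collector-fr \
      --load-balancing-scheme=INTERNAL --backend-service=ids-backend \
      --is-mirroring-collector --network=my-vpc --subnet=collector-subnet \
      --region=us-central1 --ip-protocol=TCP --ports=ALL

  # Mirror the workload subnet to the collector.
  gcloud compute packet-mirrorings create single-region-policy \
      --region=us-central1 --network=my-vpc \
      --mirrored-subnets=mirrored-subnet --collector-ilb=collector-fr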

Single VPC with multiple regions
Because mirrored instances and collectors must be in the same region, it stands to reason that a VPC containing subnets in multiple regions needs multiple collectors, multiple ILBs, and multiple Packet Mirroring policies. To account for multiple regions, simply stamp out a deployment similar to the one above once per region. We still recommend using Cloud NAT.

The following example shows a single VPC that spans two different regions; however, a similar architecture can be used for a VPC with any number of regions.
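
For instance, if the second region were europe-west1 (a placeholder, as are the resource names), the additional policy would simply point that region’s mirrored subnet at that region’s own collector ILB:

  gcloud compute packet-mirrorings create multi-region-policy-euw1 \
      --region=europe-west1 --network=my-vpc \
      --mirrored-subnets=mirrored-subnet-euw1 \
      --collector-ilb=collector-fr-euw1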

Shared VPC
Packet Mirroring also supports Shared VPC. In this example, the collectors (IDSs), the ILB, and the Packet Mirroring policy all exist inside the host project. The collectors use their own non-shared subnet. The mirrored sources (WebServers), however, exist inside their service project using a shared subnet from the Shared VPC. This allows the deployment of the IDS solution to be left up to the organization’s cloud network operations group, freeing application developers to focus on application development. Cloud NAT is configured to allow the IDS instances internet access.
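
In gcloud terms, the policy for this design might look roughly like the sketch below, run in the host project: the mirrored subnet is the shared subnet used by the service project’s web servers, while the collector ILB sits in the host project’s non-shared collector subnet. The project ID, network, subnet, forwarding-rule name, and region are placeholders.

  gcloud compute packet-mirrorings create shared-vpc-policy \
      --project=host-project --region=us-central1 \
      --network=shared-vpc \
      --mirrored-subnets=shared-web-subnet \
      --collector-ilb=collector-fr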

Peered VPC
Packet Mirroring also works when collectors and mirrored sources are in different VPCs that are peered together, such as in a hub-and-spoke design. The same requirements apply when mirroring traffic between peered VPCs; for example, the collectors and mirrored sources must still be in the same region. In the example below, the mirrored sources (WebServers) and the Packet Mirroring policy exist in VPC_DM_20 in the DM_20 project. On the other side, the ILB and collectors (IDSs) exist in the peered VPC named VPC_SECURITY in the DM_IDS project. This allows the users in the source VPC to selectively choose what traffic is forwarded to the collectors across the VPC peering. Cloud NAT is configured to allow the IDS instances internet access. Keep in mind the Packet Mirroring role requirements between the different projects; proper IAM permissions must be configured.
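
Because the policy lives alongside the mirrored sources while the collector ILB lives in the peered VPC, the policy has to reference the collector’s forwarding rule in the other project, for example by its full URL. A rough sketch, using lowercase placeholder project and network IDs standing in for DM_20, DM_IDS, and VPC_DM_20 (the subnet, forwarding-rule name, and region are also placeholders):

  gcloud compute packet-mirrorings create peered-vpc-policy \
      --project=dm-20 --region=us-central1 \
      --network=vpc-dm-20 \
      --mirrored-subnets=web-subnet \
      --collector-ilb=https://www.googleapis.com/compute/v1/projects/dm-ids/regions/us-central1/forwardingRules/collector-fr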

Don’t sacrifice network visibility

Using Packet Mirroring to power a cloud IDS solution, whether it’s open-source or proprietary, is a great option that many Google Cloud customers use. The key is where to place your collectors, ILBs, and the Packet Mirroring policy itself—especially when you use a more advanced VPC design. Once multiple VPCs and GCP projects are introduced into the deployment, the implementation only becomes more complex. Hopefully, this blog has shown you how to use Packet Mirroring with an IDS in some of the more common VPC designs. For a hands-on tutorial, check out Qwiklabs’ Google Cloud Packet Mirroring with OpenSource IDS, which walks you through creating a VPC, building an IDS instance, installing Suricata, and deploying Packet Mirroring.

By Jonny Almaleh (PSO Network Specialist)
Source: Google Cloud Blog


