Google Cloud | Networking

How To Use Packet Mirroring For IDS In Different VPC Designs

When migrating from on-premises to the cloud, many Google Cloud customers want scalable solutions to detect and alert on higher-layer network anomalies, keeping the same level of network visibility they have on-prem. The answer may be to combine Packet Mirroring with an Intrusion Detection System (IDS) such as the open-source Suricata, or some other preferred threat detection system. This type of solution can provide the visibility you need in the cloud to detect malicious activity, alert, and perhaps even implement security measures to help prevent subsequent intrusions. 

However, design strategies for Packet Mirroring plus IDS can be confusing, considering the number of available VPC design options. For instance, there’s Google’s global VPC, Shared VPC, and VPC Peering. In this blog, we’ll show you how to use Packet Mirroring and virtual IDS instances in a variety of VPC designs, so you can inspect network traffic while keeping the ability to use the supported VPC options that Google Cloud provides. 

Packet Mirroring basics

But first, let’s talk some more about Packet Mirroring, one of the key tools for security and network analysis in a Google Cloud networking environment. Packet Mirroring is functionally similar to a network tap or a SPAN session in traditional networking: it captures network traffic (ingress and egress) from select “mirrored sources,” copies the traffic, and forwards the copy to “collectors.” Packet Mirroring captures the full payload of each packet, not just the headers. And because it is not based on any sampling period, you can use it for in-depth packet-level troubleshooting, security solutions, and application-layer network analysis.

Packet Mirroring relies on a “Packet Mirroring policy” with five attributes:

  1. Region
  2. VPC network(s)
  3. Mirrored source(s)
  4. Collector (destination)
  5. Mirrored traffic (filter)

Here’s a sample Packet Mirroring policy:
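As a sketch, a policy covering all five attributes might look like this in the gcloud CLI (the project, network, subnet, and forwarding-rule names here are hypothetical):

```shell
# Hypothetical resource names; the five policy attributes map to flags:
#   region -> --region, VPC network -> --network,
#   mirrored sources -> --mirrored-subnets,
#   collector -> --collector-ilb (the ILB's forwarding rule),
#   traffic filter -> --filter-protocols / --filter-cidr-ranges
gcloud compute packet-mirrorings create ids-mirror-policy \
    --region=us-central1 \
    --network=my-vpc \
    --mirrored-subnets=webserver-subnet \
    --collector-ilb=ids-collector-fr \
    --filter-protocols=tcp \
    --filter-cidr-ranges=0.0.0.0/0
```

Sources can also be selected by instance name or network tag instead of by subnet, and omitting the filter flags mirrors all traffic.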

When creating a Packet Mirroring policy, consider these key points:

  • Mirrored sources and collectors must be in the same region, but can be in different zones—or even different VPCs or projects.
  • Collectors must be placed behind an Internal Load Balancer (ILB).
  • Mirrored traffic consumes additional bandwidth on the mirrored sources. Size your instances accordingly.
  • The collectors see network traffic at Layer 3 and above the same way that the mirrored VMs see the traffic. This includes any NATing and/or SSL decryption that may occur at a higher layer within Google Cloud.

There are two user roles that are especially relevant for creating and managing Packet Mirroring:

  • “compute.packetMirroringUser” – This role gives users the rights to create, update, and delete Packet Mirroring policies. It is required in the project where the Packet Mirroring policy will live.
  • “compute.packetMirroringAdmin” – This role allows users to mirror the desired targets so their traffic can be collected. It is required in the project that owns the mirrored resources. 
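As a sketch, the two roles might be granted like this (the project IDs and user are hypothetical):

```shell
# packetMirroringUser in the project where the policy itself will live
gcloud projects add-iam-policy-binding policy-project \
    --member="user:netops@example.com" \
    --role="roles/compute.packetMirroringUser"

# packetMirroringAdmin in the project that owns the resources to be mirrored
gcloud projects add-iam-policy-binding sources-project \
    --member="user:netops@example.com" \
    --role="roles/compute.packetMirroringAdmin"
```

In a single-project deployment both roles land in the same project; they diverge once the policy and the mirrored sources live in different projects.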

Using Packet Mirroring to power IDS

An IDS needs to see traffic to be able to inspect it. You can use Packet Mirroring to feed traffic to a group of IDSs; this approach has some significant benefits over other methods of steering traffic to an IDS instance. For example, some cloud-based IDS solutions require special software (i.e., an agent) to run on each source VM, and that agent duplicates and forwards traffic to the IDS. With Packet Mirroring, you don’t need to deploy any agents on VMs and traffic is mirrored to IDS in a cloud-native way. And while an agent-based solution is fully distributed and prevents network bottlenecks, it requires that the guest operating system support the software. Furthermore, with an agent-based solution, CPU utilization and network traffic on the VM will most certainly increase because the guest VM and its resources are tasked with duplicating traffic. High CPU utilization related to network throughput is a leading contributor to poor VM performance.

Another common approach is to place a virtual appliance “in-line” between the network source and destination. The benefit of this design is that the security appliance can act as an Intrusion Prevention System (IPS) and actually block or deny malicious traffic between networks. However, an in-line solution, where traffic is routed through security appliances, doesn’t capture east-west traffic between VMs in the same VPC. Because subnet routes take precedence in a VPC, in-line solutions that are fed traffic via static routes can’t alert on intra-VPC traffic. Thus, a large portion of network traffic is left unanalyzed; a traditional in-line IDS/IPS solution only inspects traffic at a VPC or network boundary. 

Packet Mirroring solves both these problems. It doesn’t require any additional software on the VMs, it’s fully distributed across each mirrored VM, and traffic duplication happens transparently at the SDN layer. The Collector IDS is placed out-of-path behind a load balancer and receives both north-south traffic and east-west traffic.

Using Packet Mirroring in various VPC configurations

Packet Mirroring works across a number of VPC designs, including:

  • Single VPC with a single region
  • Single VPC with multiple regions
  • Shared VPC
  • Peered VPC

Here are a few recommendations that apply to each of these scenarios:

  • Use a unique subnet for the mirrored instances and collectors. This means if the mirrored sources and the collectors are in the same VPC, create multiple subnets in each region. Place the resources that need to be mirrored in one subnet and place the collectors in the other. There is no default recommended size for the collector subnet, but make sure to allocate enough space for all the collectors that might be in that region plus a little more. Remember, you can always add additional subnets to a region in Google Cloud.
  • Don’t assign public IPs to virtual IDS instances. Rather, use Cloud NAT to provide egress internet access. Not assigning a public IP to your instances helps prevent them from being exposed to traffic from the internet.
  • If possible, use redundant collectors (IDS instances) behind the ILB for high availability.
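The first two recommendations might translate to a sketch like the following: a dedicated collector subnet, plus Cloud NAT through a Cloud Router for egress (names and IP ranges are hypothetical):

```shell
# Separate subnet for collectors, sized with headroom for growth
gcloud compute networks subnets create collector-subnet \
    --network=my-vpc \
    --region=us-central1 \
    --range=10.10.20.0/24

# Cloud NAT (attached to a Cloud Router) gives the IDS instances
# egress internet access without assigning them public IPs
gcloud compute routers create ids-router \
    --network=my-vpc \
    --region=us-central1
gcloud compute routers nats create ids-nat \
    --router=ids-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```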

Now, let’s take a look at these designs one by one. 

Single VPC with a single region
This is the simplest of all the supported designs. In this design, all mirrored sources exist in one region in a standard VPC. It is most suitable for small test environments or VPCs where network management is not dedicated to a networking team. Note that the mirrored sources, Packet Mirroring policy, collector ILB, and IDS instances are all contained in the same region and same VPC. Lastly, Cloud NAT is configured to allow the IDS instances internet access. Everything is contained in a single region, single VPC, and single project.
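The collector side of such a deployment could be sketched as follows, assuming the IDS instances already sit in a managed instance group (here called ids-ig; all names are hypothetical). The key detail is that the ILB’s forwarding rule must be flagged as a mirroring collector:

```shell
# Regional health check and internal TCP load balancer for the IDS group
gcloud compute health-checks create tcp ids-hc \
    --region=us-central1 \
    --port=22
gcloud compute backend-services create ids-backend \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --health-checks=ids-hc \
    --health-checks-region=us-central1 \
    --region=us-central1
gcloud compute backend-services add-backend ids-backend \
    --instance-group=ids-ig \
    --instance-group-zone=us-central1-a \
    --region=us-central1

# The forwarding rule must be marked as a mirroring collector,
# otherwise the Packet Mirroring policy cannot reference it
gcloud compute forwarding-rules create ids-collector-fr \
    --load-balancing-scheme=INTERNAL \
    --network=my-vpc \
    --subnet=collector-subnet \
    --region=us-central1 \
    --backend-service=ids-backend \
    --ip-protocol=TCP \
    --ports=all \
    --is-mirroring-collector
```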

Single VPC with multiple regions
Because mirrored instances and collectors must be in the same region, it stands to reason that a VPC that contains subnets in multiple regions needs multiple collectors, multiple ILBs, and multiple Packet Mirroring policies. To account for multiple regions, simply stamp out a similar deployment to the one above in each region. We still recommend using Cloud NAT. 

The following example shows a single VPC that spans two different regions, however, a similar architecture can be used for a VPC with any number of regions.

Shared VPC
Packet Mirroring also supports Shared VPC. In this example, the collectors (IDSs), the ILB, and the Packet Mirroring policy all exist inside the host project. The collectors use their own non-shared subnet. The mirrored sources (WebServers), however, exist inside their service project using a shared subnet from the Shared VPC. This allows the deployment of an IDS solution to be left up to the organization’s cloud network operations group, freeing application developers to focus on application development. Cloud NAT is configured to allow the IDS instances internet access.

Peered VPC
Packet Mirroring also works when collectors and mirrored sources are in different VPCs that are peered together, such as in a hub-and-spoke design. The same requirements for mirroring traffic between VPCs apply; for example, the collector and mirrored sources must be in the same region. In the example below, the mirrored sources (WebServers) and the Packet Mirroring policy exist in VPC_DM_20 in the DM_20 project. On the other side, the ILB and collectors (IDSs) exist in the peered VPC named VPC_SECURITY in the DM_IDS project. This allows users in the source VPC to selectively choose what traffic is forwarded to the collector across the VPC peering. Cloud NAT is configured to allow the IDS instances internet access. Keep in mind the Packet Mirroring role requirements between the different projects; proper IAM permissions must be configured.
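Under this layout, the policy sketch might look like the following; the project and VPC names come from the example above, while the subnet and forwarding-rule names are hypothetical. The cross-project collector is referenced by its full resource URL:

```shell
# Policy lives in the mirrored-source project (DM_20), referencing
# the collector ILB's forwarding rule in the security project
# (DM_IDS) across the VPC peering by its full resource URL
gcloud compute packet-mirrorings create cross-vpc-mirror \
    --project=DM_20 \
    --region=us-central1 \
    --network=VPC_DM_20 \
    --mirrored-subnets=webserver-subnet \
    --collector-ilb=projects/DM_IDS/regions/us-central1/forwardingRules/ids-collector-fr
```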

Don’t sacrifice network visibility

Using Packet Mirroring to power a cloud IDS solution, whether it’s open-source or proprietary, is a great option that many Google Cloud customers use. The key is where to place your collectors, ILBs, and the Packet Mirroring policy itself—especially when you use a more advanced VPC design. Once multiple VPCs and GCP projects get introduced into the deployment, the implementation only becomes more complex. Hopefully, this blog has shown you how to use Packet Mirroring with an IDS in some of the more common VPC designs. For a hands-on tutorial, check out the Qwiklabs lab Google Cloud Packet Mirroring with OpenSource IDS, which walks you through creating a VPC, building an IDS instance, installing Suricata, and deploying Packet Mirroring.

By Jonny Almaleh (PSO Network Specialist)
Source: Google Cloud Blog

