
Marvell Blogs


Posts Tagged 'DSP'

  • March 17, 2026

    Structera S: Scaling the AI Memory Wall with CXL Switching

    By Jianping Jiang, Head of Product Marketing, CXL Switch, Marvell

    The AI memory wall, the gap between the memory capacity and bandwidth that AI infrastructure demands and the amount that conventional memory architectures can deliver, is widening at an alarming pace.

    And the consequences are increasingly ominous for data center operators and their customers: idle XPUs, underutilized equipment, longer processing times, higher costs, and ultimately a lower return on investment. Meanwhile, memory, already second only to GPUs in data center semiconductor spend, continues to soar in price.

    The Marvell® Structera™ S family of Compute Express Link (CXL) switches scales the memory wall by providing a pathway for adding terabytes of shareable memory to infrastructure and dynamically allocating bandwidth and capacity to boost utilization and application performance. CXL switches don't just add memory bandwidth and capacity; they enable data center operators to use both more wisely.

    Structera S is the successor to the groundbreaking Apollo line of CXL switches developed by XConn Technologies, now part of Marvell. Structera S 20256 for PCIe Gen 5.0/CXL 2.0 became the first commercially available CXL switch upon its release last year.

    Marvell is expanding the family with Structera S 30260 for PCIe 6.0/CXL 3.x, which supports 16 or 32 CPUs or GPUs over 260 lanes with up to 48TB of shared memory and 4TB/second of cumulative bandwidth. Marvell is showcasing Structera S 30260 in a live demonstration this week at OFC 2026 and plans to sample it to customers in 3Q 2026.
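    The 4TB/second figure is consistent with simple lane arithmetic. Here is a quick sketch, assuming (these rates are not from the post) that a PCIe 6.0 lane signals at 64 GT/s and delivers roughly 8 GB/s of usable bandwidth per direction:

    ```python
    # Back-of-the-envelope check of the Structera S 30260 bandwidth figure.
    # Assumptions (not stated in the post): PCIe 6.0 runs at 64 GT/s per lane
    # and, with PAM4 signaling and FLIT encoding, yields roughly 8 GB/s of
    # usable throughput per lane per direction.
    GB_PER_LANE_PER_DIR = 8   # GB/s, approximate usable throughput per lane
    LANES = 260               # lanes on the Structera S 30260, per the post

    per_direction_tb = LANES * GB_PER_LANE_PER_DIR / 1000   # TB/s, one direction
    cumulative_tb = 2 * per_direction_tb                    # both directions

    print(f"{per_direction_tb:.2f} TB/s per direction")   # ~2.08 TB/s
    print(f"{cumulative_tb:.2f} TB/s cumulative")         # ~4.16 TB/s
    ```

    The cumulative result lands at roughly 4 TB/s, matching the figure quoted for the switch.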

  • March 17, 2026

    The Next Step for PCIe: Scale-up Fabrics for AI

    By Krishna Mallampati, Senior Director of Product Marketing, Data Center Switching, Marvell

    Since its introduction in 2004, PCIe® has become the most popular interconnect for low-latency chip-to-chip connections. From its humble beginnings in fan-out interconnects, PCIe has spread into AI and cloud servers, JBOF storage systems, automotive ADAS systems, industrial automation, PCs, and other platforms.

    Scale-up AI servers, which can contain hundreds of processors spread over multiple racks, represent the next logical step for PCIe. Although far larger than today's single-chassis AI servers, scale-up servers demand the same thing from interconnect fabrics: coherent, low-latency links that enable fast, secure communication between components. PCIe's status as a widely used standard that evolves to meet customer demands further puts it at the forefront for scale-up.

    Let¡¯s explore the PCIe scale-up usage model and how these architectures will evolve.

    PCIe Scale-up Usage Model


  • March 16, 2026

    ³Ô¹ÏºÚÁÏ Joins XPO MSA To Accelerate Innovation in AI Optical Modules

    By Xi Wang, Senior Vice President and General Manager of the Connectivity Business Unit, Marvell

    Marvell has become a founding member of the eXtra dense Pluggable Optics (XPO) Multi-Source Agreement (MSA), an industry initiative organized by Arista Networks to define a new optical transceiver form factor purpose-built for AI-scale infrastructure.

    The XPO concept is designed to dramatically increase bandwidth density by enabling liquid cooling at the module level. XPO modules are substantially larger than the octal small form factor pluggable (OSFP) modules commonly deployed in today's data centers, but they deliver a step-function increase in performance. Each XPO module integrates 64 lanes operating at 200 Gbps, eight times more lanes than current pluggable modules, for a total of 12.8 Tbps of bandwidth per module.

    This leap in bandwidth is enabled in part by an integrated cold plate that can deliver up to 400W of cooling per module. The combination of larger modules, significantly higher lane counts, and liquid cooling delivers a four-fold increase in bandwidth density for switches across scale-up, scale-out, and scale-across network architectures.
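    The per-module figure follows directly from the lane math stated in the post (64 lanes at 200 Gbps, versus 8 lanes on today's octal modules); a quick sketch:

    ```python
    # Sanity check of the XPO module bandwidth arithmetic from the post.
    XPO_LANES = 64          # electrical lanes per XPO module
    GBPS_PER_LANE = 200     # Gbps per lane
    OCTAL_LANES = 8         # lanes in an octal (OSFP-class) pluggable module

    total_gbps = XPO_LANES * GBPS_PER_LANE
    total_tbps = total_gbps / 1000

    print(f"{total_tbps} Tbps per XPO module")          # 12.8 Tbps
    print(f"{XPO_LANES // OCTAL_LANES}x the lane count of an octal module")  # 8x
    ```

    Both numbers line up with the 12.8 Tbps and eight-fold claims above.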

  • October 19, 2023

    Shining a Light on ³Ô¹ÏºÚÁÏ Optical Technology and Innovation in the AI Era

    By Kristin Hehir, Senior Manager, PR and Marketing, Marvell

    The sheer volume of data traffic moving across networks daily is mind-boggling almost any way you look at it. During the past decade, global internet traffic grew many times over, according to the International Energy Agency. One contributing factor to this growth is the popularity of mobile devices and applications: smartphone users now spend nearly a third of their waking hours on their devices, up from three hours just a few years ago. The result is an incredible amount of data in the cloud that needs to be processed and moved, much of it as traffic inside data centers. Generative AI, and the exponential growth in the size of the data sets needed to feed it, will invariably continue to push the curve upward.

    Yet, for more than a decade, total power consumption has stayed relatively flat thanks to innovations in storage, processing, networking, and optical technology for data infrastructure. The debut of PAM4 digital signal processors (DSPs) for accelerating traffic inside data centers, and of coherent DSPs for pluggable modules, has played a large but often quiet role in paving the way for growth while reducing cost and power per bit.

    Marvell at ECOC 2023

    At Marvell, we've been gratified to see these technologies get more attention. At the recent European Conference on Optical Communication, Dr. Loi Nguyen, EVP and GM of Optical at Marvell, talked with Lightwave editor in chief Sean Buckley about how Marvell 800 Gbps and 1.6 Tbps technologies will enable AI to scale.

  • October 18, 2023

    An Extreme Makeover for Data Centers

    By Dr. Radha Nagarajan, Senior Vice President and Chief Technology Officer, Optical and Cloud Connectivity Group, Marvell

    This article was originally published in an industry publication.

    People or servers?

    Communities around the world are debating this question as they try to balance the plans of service providers against the concerns of residents.

    Last year, the Greater London Authority told real estate developers that new housing projects in West London may not be able to go forward until 2035 because of the grid capacity claimed by data centers. Other jurisdictions have said they won't accept new data center applications until 2028, and cities such as Amsterdam have placed strict limits on new facilities. Cities in the southwest, meanwhile, are increasingly worried about water consumption as mega-sized data centers proliferate.

    When you add in the additional computing cycles needed for AI and applications like ChatGPT, the conflict becomes only more heated.

    On the other hand, we know we can't live without them. Modern society, with its remote work, digital streaming, and modern communications, depends on data centers. Data centers are also one of sustainability's biggest success stories. Although workloads grew by approximately 10x in the last decade with the rise of SaaS and streaming, data centers' share of worldwide electricity consumption has stayed relatively flat thanks to technology advances, workload consolidation, and new facility designs. Try to name another industry that increased output by 10x on a relatively fixed energy diet.
