Category: Data Center


Are Supply Chain Issues and Extended Fiber Cabling Lead Times Delaying Your Network and Data Center Projects?

By Tony Walker

Even as the global economy slowly starts to recover, one rather destructive issue left in the wake of the waning Covid-19 pandemic is the major disruption to the global supply chain. Previously existing inefficiencies in the supply chain have been compounded by border restrictions, labor and material shortages, skyrocketing demand following lockdowns, weather events, and geopolitical factors (just to name a few), leaving bottlenecks in every link of the supply chain while driving prices and lead times to all-time highs. The Institute for Supply Management’s latest survey of purchasing managers shows that the average lead time for production materials increased from 15 days to 92 days in the third quarter of this year, the highest level seen since 1987.

While many of the containers piling up in ports hold consumer goods, the information and communications technology industry is certainly not immune to this crisis, especially considering the shortage of raw materials, chips, capacitors, and resistors used in network equipment and sub-component assemblies. With many planned upgrades put on hold at the onset of the pandemic now ready to ramp up, data center and network infrastructure owners and operators are increasingly frustrated by long lead times that hinder the very thing that Covid-19 accelerated: the hunger for more information and connectivity.

In the face of these challenges, leading industry manufacturers are innovating to maintain resiliency and meet customer expectations by diversifying suppliers and reducing reliance on overseas sources, as well as implementing advanced inventory management strategies such as increased forecasting, localizing production, and expanding distribution plans. At Siemon, we’re taking it one step further with guaranteed expedited shipping on mission critical fiber cabling solutions through our FiberNOW™ fast ship program.

Leveraging new advances in logistics and adding extensively to Siemon’s already best-in-class ISO 9001 manufacturing and warehousing capabilities, the multimode and singlemode fiber cabling and connectivity products on FiberNOW’s extensive list are guaranteed to leave our facility in 5 days or less – including both standard and custom configurable options!

FiberNOW solutions include plug-and-play LC and MTP OM4 and OS2 jumpers and trunks, MTP-LC assemblies, cassettes, modules, and enclosures: everything today’s network and data center owners and operators need to deploy 10 to 400 Gigabit switch-to-switch and switch-to-server links. The FiberNOW solution allows new services and applications to be deployed quickly to meet ever-growing demand.

Instead of being frustrated with supply chain issues and long lead times that are delaying your data center network upgrades and expansions, get the fiber you need now with Siemon’s FiberNOW program.

Learn more: FiberNOW™ – Fast Ship Program

  Category: Data Center, Fiber

How Far Can You Go with Top-of-Rack?

By Ryan Harris

In a previous blog, we discussed how SFP direct attach cables (DACs) will support most enterprise server speeds in top-of-rack (ToR) switch-to-server deployments well into the future, with their ability to support downlink speeds of 25 Gig via SFP28 DACs and the potential for 50 Gig via emerging SFP56 DACs. But the truth is that larger cloud and enterprise data centers are already seeing the need for 100 Gig server speeds, with the need for even faster speeds expected in the future.

But will ToR deployments continue to support switch-to-server links at these next-generation speeds? Let’s take a closer look at the technology and key considerations involved.

How Do We Get There?

The ability to support faster transmission speeds has a lot to do with the binary encoding schemes used to convert data into digital signals. While we won’t delve into the physics behind encoding, it’s essentially the process of turning data into binary bits (i.e., 1’s and 0’s) via discrete voltage levels. The most common encoding scheme long used in data transmission is non-return-to-zero (NRZ), which uses two different voltage levels for the two binary digits: positive voltage represents a “1” and negative voltage represents a “0” (this is also referred to as two-level pulse amplitude modulation, or PAM2). NRZ encoding has evolved significantly over the past few decades and is primarily used to support bit rates of 1, 10, and 25 Gb/s per lane in data center links.

When we look at small form-factor pluggable technology, the single-lane SFP+ and SFP28 high-speed interconnects that support 10 and 25 Gig respectively are based on NRZ encoding. For higher speeds, the 4-lane QSFP+ and QSFP28 interconnects that support 40 and 100 Gig are also based on NRZ: QSFP+ at a 10 Gb/s bit rate per lane and QSFP28 at a 25 Gb/s bit rate per lane. Technically, because NRZ can support a bit rate of 50 Gb/s, it seems logical that a single-lane SFP interconnect could support 50 Gig and a 4-lane QSFP interconnect could support 200 Gig using NRZ encoding. However, at NRZ bit rates above 25 Gb/s, channel loss becomes a problem. Enter four-level pulse amplitude modulation, or PAM4.

PAM4 encoding delivers twice the bit rate of NRZ in the same signal period by using four voltage levels instead of two, supporting 50 and 100 Gb/s bit rates without an increase in channel loss. For small form-factor pluggable technology, PAM4 now gives us single-lane SFP56 interconnects for 50 Gig and four-lane QSFP56 interconnects for 200 Gig. PAM4 is also what enables 400 Gig applications: the double-density 8-lane QSFP-DD form factor relies on the PAM4 50 Gb/s bit rate to achieve 400 Gig (i.e., 50 Gb/s X 8 lanes), which is ideal for switch-to-switch deployments. Unfortunately, the increased throughput offered by PAM4 comes at a cost.
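To make the lane math concrete, here is a quick back-of-the-envelope sketch in Python. The per-lane symbol rates are rounded, nominal values used purely for illustration; actual line rates run slightly higher because of encoding and FEC overhead.

```python
# Back-of-the-envelope lane math for the form factors discussed above.
# Symbol rates are nominal, rounded values; real line rates are slightly
# higher due to encoding and FEC overhead.

def aggregate_gbps(bits_per_symbol, gbaud_per_lane, lanes):
    """Aggregate throughput = bits per symbol x symbol rate per lane x lane count."""
    return bits_per_symbol * gbaud_per_lane * lanes

# NRZ (PAM2): 1 bit per symbol
print(aggregate_gbps(1, 10, 1))   # SFP+    -> 10  (10 Gig)
print(aggregate_gbps(1, 25, 1))   # SFP28   -> 25  (25 Gig)
print(aggregate_gbps(1, 10, 4))   # QSFP+   -> 40  (40 Gig)
print(aggregate_gbps(1, 25, 4))   # QSFP28  -> 100 (100 Gig)

# PAM4: 2 bits per symbol at roughly the same ~25 GBaud symbol rate
print(aggregate_gbps(2, 25, 1))   # SFP56   -> 50  (50 Gig)
print(aggregate_gbps(2, 25, 4))   # QSFP56  -> 200 (200 Gig)
print(aggregate_gbps(2, 25, 8))   # QSFP-DD -> 400 (400 Gig)
```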

What the FEC?

PAM4 encoding is far more susceptible to noise than NRZ. To improve performance and counteract potential errors caused by the increased noise, PAM4 signals use advanced forward error correction (FEC). FEC works by adding redundant data that the receiver can check and use to correct errors and recover the original data without the need for signal retransmission. PAM4 requires FEC, but FEC adds latency, typically on the order of 100 to 500 nanoseconds (ns) per link.
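To illustrate the principle (and only the principle), here is a toy Python sketch using a simple repetition code. Real PAM4 links use far more efficient Reed-Solomon codes rather than repetition, but the idea is the same: send redundant bits so the receiver can correct an error without a retransmission.

```python
# Toy illustration of the FEC principle only: send redundant bits so the
# receiver can correct errors without asking for a retransmission.
# Real PAM4 links use far more efficient Reed-Solomon codes, not repetition.

def fec_encode(bits):
    """Repeat every bit three times (adds 200% redundancy)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority-vote each group of three, correcting any single flipped bit."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
tx = fec_encode(data)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
tx[4] = 1                      # noise flips one bit on the wire
assert fec_decode(tx) == data  # receiver still recovers the original data
```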

While there are low-latency FEC developments happening behind the scenes that aim to cut this delay by as much as 50%, the fact remains that some applications are extremely sensitive to any added delay. Financial trading, edge computing, interactive gaming, video conferencing, virtual and augmented reality, artificial intelligence, real-time monitoring, and data analytics all have tight end-to-end latency budgets, so every contribution along the path counts. In gaming, for example, total latency over 100 ms means a noticeable lag for players, while high-frequency trading targets microsecond-level response times. For data centers looking to support these applications, latency in switch-to-server connections is a real consideration.

Due to the added latency of FEC with PAM4, the highest-speed and lowest-latency option is currently the 4-lane QSFP28 DAC, which supports 100 Gig using NRZ at a 25 Gb/s bit rate per lane and does not require FEC at lengths up to 3 meters. While most enterprise data centers are just starting to shift to 25 Gig server connections using single-lane SFP28 DACs, 4-lane QSFP28 DACs enable migration to high-speed, low-latency 100 Gig server connections to support emerging real-time applications.

What are the Options?

For existing NRZ encoding, Siemon currently offers several options for using high-speed interconnects in the data center to support 10 to 100 Gig switch-to-server links. These include direct connections over short-reach (1 to 3 meter) links for in-cabinet ToR deployments using DACs, or over longer-reach (1 to 20 meter) links for cabinet-to-cabinet deployments (e.g., end-of-row) using active optical cables (AOCs). For breakout applications where a single switch port connects to multiple lower-speed servers, Siemon also offers a variety of hybrid breakout assemblies for both DACs and AOCs. As highlighted in a previous blog, when choosing between DACs and AOCs, it’s important to consider density, distance, power consumption, scalability and interoperability, as well as overall cost and availability. Siemon’s current offering includes the following (a simple distance-based selection sketch follows the list):

  • SFP+ DACs and AOCs for 10 Gig links
  • SFP28 DACs and AOCs for 25 Gig links
  • QSFP+ DACs and AOCs for 40 Gig links
  • QSFP28 DACs and AOCs for 100 Gig links
  • QSFP+ to 4 SFP+ DACs and AOCs for 4X10 Gig breakout links
  • QSFP28 to 4 SFP28 DACs and AOCs for 4X25 Gig breakout links
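And as promised above, here is a minimal sketch of how that distance guidance might translate into a selection rule. The cutoffs are the illustrative figures from this post, not hard limits from any standard.

```python
# Rough media-selection helper based on the reach guidance above.
# The cutoffs (3 m for DACs, 20 m for AOCs) are the illustrative figures
# from this post, not hard limits from any standard.

def pick_interconnect(link_length_m: float) -> str:
    if link_length_m <= 3:
        return "DAC (in-cabinet ToR switch-to-server)"
    if link_length_m <= 20:
        return "AOC (cabinet-to-cabinet, e.g., end-of-row)"
    return "Structured fiber cabling with transceivers"

for length in (2, 10, 30):
    print(f"{length} m -> {pick_interconnect(length)}")
```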

More to Come

With the introduction of PAM4 encoding, DACs and AOCs will enable future 200 and 400 Gig links. You can rest assured that Siemon has an eye on the market with plans to introduce PAM4 high-speed interconnects when these higher-speed switch-to-server links come to fruition. Future PAM4 options to be on the lookout for include:

  • QSFP56 DACs and AOCs for 200 Gig links
  • QSFP-DD to 2 QSFP56 DACs and AOCs for 2X200 Gig breakout links

And it doesn’t stop there. While PAM4 to NRZ conversion technology can be used with AOCs to support a 4X100 Gig breakout application using QSFP-DD-to-QSFP28 hybrid assemblies, cost is always a factor. Developments are therefore underway for more cost-efficient PAM4-to-PAM4 breakout applications with DACs using a QSFP-DD to two-lane (i.e., double-density) SFP interface and a PAM4 50 Gb/s bit rate. However, it is still unclear which double-density SFP interface, the SFP-DD or the DSFP, will become the predominant connector for this 4X100 Gig breakout solution. So what does this all mean for your data center?

The key takeaway is that the introduction of PAM4 encoding technology enables DACs and AOCs to support from 10 to 400 Gig direct links, including the lowest-latency 100 Gig option of QSFP28 DACs that use NRZ encoding technology for emerging real-time applications. And that means that ToR switch-to-server deployments with DACs are here to stay and will get you where you need to go.

  Category: Data Center

Choosing the Right Infrastructure for Your Next Server Migration

By Ryan Harris

Server speeds in the data center space have been consistently on the rise over the past decade. To support the increasing amount and size of data for emerging IoT, AI-intensive, and edge-computing applications, cloud data centers are now migrating to 100 Gig downlink connections to servers. In fact, a recent report from Crehan Research Inc. indicates that shipments of 100 Gig switch ports have surpassed 10 Gig in the market.

In the enterprise market, data centers are migrating to 25 Gig server technologies, which are now readily available at a price only about 20% higher than 10 Gig while offering 2.5 times the performance, helping them keep up with digital transformation trends.

Depending on the size and scope of your data center, migrating your server connections to these next-generation speeds is not a question of if, but a question of when. But it’s also a question of how. Do you go with point-to-point connections using Direct Attach Copper Cables (DACs) or Active Optical Cables (AOCs), or do you take a structured cabling approach with transceivers and duplex fiber connectivity? Identifying the right infrastructure for your data center environment can go a long way in easing migration and reducing cost. Let’s take a look at the key considerations.

Distance and Density Scenarios

The first thing data center operators want to look at when choosing infrastructure for switch-to-server connections is the distance they need to support. This often has a lot to do with your overall data center design, such as whether you deploy top-of-rack (ToR) switches in each server cabinet, a middle-of-row (MoR) or end-of-row (EoR) configuration where switches reside in a separate cabinet, or a distributed environment where switches reside in a different location altogether.

QSFP28 DACs for 100 Gig and SFP28 DACs for 25 Gig are ideal for ToR deployments where you only need to support short lengths of up to 3 meters within the same cabinet. Depending on your specific application needs, you may also need to consider latency: the amount of time it takes for a bit of data to travel between the switch and the server. Real-time applications like AI, virtual reality, data analytics, gaming, financial trading, and other emerging technologies require much lower latency. As a pass-through copper cabling solution, DACs up to 3 meters in length used in a ToR configuration offer the lowest latency because they do not require forward error correction (FEC), which adds redundant data (and a small processing delay) to help detect and correct transmission errors.

For MoR and EoR deployments with slightly longer lengths, you’ll need to do a closer comparison between AOCs and structured cabling based on your specific environment. AOCs can support up to 100 meters but are typically better suited for 10 to 15 meters within a row. Once you get outside of a row, typically beyond 15 meters, AOCs can be difficult to route and manage, and structured cabling might be the better choice, especially if you’re dealing with higher densities. If you’re an enterprise data center with 30 or 40 servers housed in a few cabinets and your distances don’t require structured cabling, DACs and AOCs offer the fastest, easiest deployment: simply plug one end into the switch and the other into the server. For data centers that have hundreds or even thousands of server connections outside of the switch cabinet, multi-connection structured cabling can ease management via high-density patching environments.

The Price of Power Consumption

Power consumption is another consideration when it comes to cost. The use of ToR switches with passive DACs offers the lowest power consumption per port. With their embedded transceivers, AOCs consume slightly more power than DACs. Structured cabling with transceivers is the most power-hungry of the three options, typically consuming upwards of 1.2 Watts per port for 25 Gig and 3.5 Watts for 100 Gig. When compared to structured cabling for 100 and 25 Gig, AOCs offer 33% and 49% less port power and DACs offer 94% and 97% less port power, respectively.
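As a rough sketch of that comparison, the following Python snippet works out approximate per-port and total power for a hypothetical 48-port example using the figures above. Actual draw will vary by vendor and transceiver type.

```python
# Rough per-port power comparison using the figures quoted above:
# transceiver draw per port plus the percentage savings cited for AOCs and DACs.

transceiver_watts = {"25G": 1.2, "100G": 3.5}   # structured cabling with optics
savings_vs_transceiver = {
    "25G":  {"AOC": 0.49, "DAC": 0.97},
    "100G": {"AOC": 0.33, "DAC": 0.94},
}

ports = 48  # hypothetical fully populated leaf switch
for speed, base_w in transceiver_watts.items():
    print(f"{speed} transceivers: ~{base_w * ports:.0f} W for {ports} ports")
    for media, saving in savings_vs_transceiver[speed].items():
        per_port = base_w * (1 - saving)
        print(f"  {speed} {media}: ~{per_port:.2f} W/port, "
              f"~{per_port * ports:.0f} W for {ports} ports")
```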

Overall Cost and Availability Impact

As a data center professional, you know that the solution you choose often comes down to price. From an overall material cost perspective, DACs are by far the least expensive option. AOCs, with their embedded transceivers, carry a higher cost than DACs but are typically still considered a cost-effective alternative to structured cabling. While your distance and density may require structured cabling, the material cost of cabling and separate transceivers makes this option the most expensive. When you then add the cost of power consumption, which includes associated cooling costs, the overall system cost of using AOCs and DACs can be anywhere from 30 to 70% lower than structured cabling.

Availability also comes into play. For example, some providers of DACs and AOCs offer only a limited range of full-meter lengths and no color options, which can be frustrating if you’re trying to reduce slack or need a fast, straightforward way to color-code applications. Lead times and labor may also be a consideration, especially if you’re under the gun to bring services online as quickly as possible. Structured fiber cabling assemblies are typically made to order, requiring longer lead times, and installation takes longer than point-to-point DACs and AOCs and requires more space for patching.

Scalability and Interoperability Factors

For data centers that need to grow as they go, scalability is another consideration. Depending on switch port configuration and autonegotiation capabilities, one of the benefits of DACs and AOCs is their ability to support 25 and 100 Gig switch connections with backwards compatibility. For example, SFP28 DACs share the same mating interface as SFP and SFP+ solutions used in 1 and 10 Gig server connections, while QSFP28 DACs share the same interface as QSFP+ solutions used in 40 Gig server connections. This means higher-speed switches can support legacy SFP+ and QSFP+ server connections with DACs or AOCs until server speeds need to be upgraded. Enterprise data centers that use structured cabling have traditionally accomplished 10 Gig server connections via copper twisted-pair cabling (i.e., 10GBASE-T). With 25GBASE-T switch technologies not readily available in the market, migration from 10 to 25 Gig with structured cabling requires a shift from copper to duplex fiber, which means a complete overhaul of both electronics and cabling infrastructure. However, once deployed, multimode and singlemode duplex fiber structured cabling for the switch-to-switch uplink connections will support both 25 and 100 Gig downlinks to the servers.
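As a simplified illustration of that shared mating compatibility (actual support always depends on the switch’s port configuration and autonegotiation), consider this sketch:

```python
# Simplified sketch of the shared mating interfaces described above.
# Actual support depends on switch port configuration and autonegotiation.

mating_families = {
    "SFP cage":  ["SFP (1 Gig)", "SFP+ (10 Gig)", "SFP28 (25 Gig)"],
    "QSFP cage": ["QSFP+ (40 Gig)", "QSFP28 (100 Gig)"],
}

def share_cage(module_a: str, module_b: str) -> bool:
    """True if both modules plug into the same physical cage family."""
    return any(module_a in family and module_b in family
               for family in mating_families.values())

print(share_cage("SFP+ (10 Gig)", "SFP28 (25 Gig)"))     # True
print(share_cage("SFP28 (25 Gig)", "QSFP28 (100 Gig)"))  # False
```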

You also need interoperability, meaning that the solution you choose should work with any vendor’s switch. Standards-based structured cabling is inherently interoperable with any vendor’s equipment, but when it comes to DACs, some switch vendors will produce a warning message when third-party cables are used. Yet DACs from switch vendors are limited in length and color options and are often more expensive, which makes third-party providers an attractive option. A common objection to replacing switch vendor DACs with third-party DACs is warranty and support. However, when there is a failure, switch vendors almost always recommend replacing the DAC as the first step, which is a quick and easy process. It’s therefore important to select third-party DACs from vendors like Siemon that have tested their products to ensure compatibility across equipment from various switch vendors. Siemon also provides samples of its DACs so customers can verify interoperability before they commit.

Support Always Matters

Last but not least, what good is any option if your vendor doesn’t offer support? The beauty of working with Siemon is that we not only offer proven, high-performance SFP28 and QSFP28 DACs in half-meter length increments and multiple color options, as well as AOCs and structured fiber cabling for all flavors of 25 and 100 Gig server deployments, but we also provide an expert Data Center Design Services team to help you make the best choice for your current and future data center needs. And our dedicated technical sales, product, and engineering professionals ensure you have the options, proven performance, and logistics to get the right solution at the right time.

Learn more about high speed interconnects >>>

  Category: Data Center

Is OM5 Fiber a Good Solution for the Data Center?

By Gary Bernstein

I created a blog on this topic back in April 2017. This content has been updated with current standards and applications, but it is still very much true today, 4½ years later. Make sure you work with people and companies you can trust that have your best interests in mind.

Wideband Multimode fiber (WBMMF) was introduced as a new fiber medium in ANSI/TIA-492AAAE in June 2016. The ISO/IEC 11801 3rd edition standard now uses OM5 as the designation for WBMMF. OM5 fiber is specified over a wider range of wavelengths, from 850 nm to 953 nm. It was created to support Shortwave Wavelength Division Multiplexing (SWDM), one of the many new technologies being developed for transmitting 40 Gb/s, 100 Gb/s, and beyond.
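As a quick back-of-the-envelope illustration of how SWDM reaches 100 Gig over a single duplex multimode pair, consider the sketch below. The four wavelengths shown are the commonly cited 100G-SWDM4 values and are listed here purely for illustration.

```python
# Back-of-the-envelope math for 100 Gig over a single duplex multimode pair
# with SWDM: four wavelengths in the 850-940 nm window, ~25 Gb/s each.
# The wavelengths below are the commonly cited 100G-SWDM4 values.

swdm4_wavelengths_nm = [850, 880, 910, 940]
gbps_per_wavelength = 25

total_gbps = gbps_per_wavelength * len(swdm4_wavelengths_nm)
print(f"{len(swdm4_wavelengths_nm)} wavelengths x {gbps_per_wavelength} Gb/s "
      f"= {total_gbps} Gb/s over one duplex fiber pair")
```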

OM5 is being presented as a potential new option for data centers that require greater link distances and higher speeds. However, many enterprise IT and data center managers are increasingly adopting Singlemode fiber systems to solve these challenges.

So, what are the reasons a data center might consider installing OM5?

“OM5 offers a longer cabling reach than OM4.”

The difference is minimal.

For the majority of current and future Multimode IEEE applications, including 40GBASE-SR4, 100GBASE-SR4, 200GBASE-SR4, 400GBASE-SR8, and the future 400GBASE-SR4, the maximum allowable reach is the same for OM5 as for OM4 cabling. There are only 3 current Ethernet applications that specify an additional 50 meters of reach with OM5. If a data center is using non-IEEE-compliant 100G-SWDM4 or BiDi transceivers, it would see a 150-meter reach with OM5 – only 50 meters more than OM4. Most cloud data centers with cabling runs over 100 meters will likely use Singlemode for 100 Gb/s and greater speeds. Additionally, any installed OM5 cabling beyond 100 meters may be limited in its ability to support future non-SWDM applications.

“OM5 will reduce costs.”

It won’t.

OM5 cabling costs about 30-40% more than OM4. In addition, if you look at the cost of a full 100 Gb/s channel, including BiDi transceivers, the amount per channel is still 40% more than a 100GBASE-SR4/OM4 channel. The costs of Singlemode transceivers have declined considerably over the past 12-18 months due to silicon photonics technologies and large hyperscale data centers buying in large volumes. When comparing the price of 100 Gb/s transceivers, 100G-PSM4 using Singlemode fiber is the same price as 100GBASE-SR4 using Multimode fiber.

“OM5 is required for higher speeds.”

Not true.

All of the current and future IEEE standards in development for 100/200/400/800 Gb/s will work with either Singlemode (OS2) or Multimode (OM4) fiber. The majority of these next-generation speeds will require Singlemode. IEEE always strives to develop future standards that work with the primary installed base of cabling infrastructure so customers can easily migrate to new speeds. The latest draft of the IEEE P802.3db standard includes 400GBASE-SR4 (a lower-cost, less complex, more attractive alternative to 400GBASE-SR4.2), which will have the same reach over OM4 and OM5.

“OM5 will create higher density from switch ports.”

It won’t.

It has been very common for data centers using 40GBASE-SR4 and 100GBASE-SR4 to increase port density by breaking out 40 or 100 Gb/s ports into 10 or 25 Gb/s channels. If a data center manager decides to use SWDM4 or BiDi modules with OM5 cabling, they cannot break out into 10 or 25 Gb/s channels. This is a major disadvantage of using this technology.

“Do the leading switch manufacturers recommend using OM5 cabling with their equipment?”

No, they specify OM3 and OM4.

Example from Cisco: “In 40-Gbps mode, the Cisco QSFP 40/100-Gbps BiDi transceiver supports link lengths of 100 and 150 meters on laser-optimized OM3 and OM4 Multimode fibers, respectively. In 100-Gbps mode, it supports 70 and 100 meters on OM3 and OM4, respectively.” Example from Arista: “100GBASE-SWDM4: Up to 70m over duplex OM3 Multi-mode fiber or 100m over duplex OM4 Multi-mode fiber”

Siemon does not see any good reason to currently recommend OM5 to large data center operators. For enterprise data centers looking at migrating to 40GBASE-SR4 or 100GBASE-SR4, OM5 offers no additional benefit over OM4. And larger cloud data centers are either already using Singlemode or planning to move to Singlemode in the near future for migration to 800 Gb/s and 1.6 Tb/s without changing out their cabling.

Learn more about Siemon’s Multimode and Singlemode solutions.

View webinar: Siemon TechTalk | What Are The Real Benefits of OM5?

  Category: Data Center, Fiber

SFP DACs Bring Your Enterprise Server Speeds Well into the Future

By Ryan Harris

There’s a lot of buzz in the industry right now about next-generation 400 Gigabit speeds, which are being adopted by Tier 1 hyperscale data center providers like Google, Amazon, and Microsoft. Tier 2 and 3 cloud service providers are expected to ramp up to these speeds next year, with large enterprises likely starting to follow suit in 2023 and 2024.

While 400 Gigabit speeds will eventually make their way into large enterprises for uplinks between switch tiers to handle increasing amounts of data, server connections are where bandwidth and latency need to keep up with e-commerce and emerging technologies like advanced data analytics, machine learning, artificial intelligence (AI), telemedicine, online banking, high-resolution video content, and other real-time applications. Thankfully high-speed interconnect direct attach cables (DACs) are keeping up with increasing requirements, ensuring that switch-to-server connections don’t become the weakest link.

They Get You Beyond 10 Gig

Most enterprise data centers employ server connection speeds of 1 or 10 Gigabits per second (Gb/s) using copper-based cabling, with 10, 40, or 100 Gigabit fiber uplinks between switch tiers; larger enterprises primarily run 10 Gig server speeds. 10 Gb/s server connections are achieved using either 10GBASE-T over category 6A structured cabling, which supports lengths up to 100 meters, or SFP+ DACs in direct short-reach connections from top-of-rack (ToR) switches, where lengths are less than 7 meters.

While not an option for data centers that need to support longer distances, SFP+ DACs in a ToR deployment have become increasingly popular because they require less power per port and offer lower latency than 10GBASE-T. ToR switches with DACs typically use well under 1 W per port, while 10GBASE-T switches range from 1.5 to 4 W per port. Latency with ToR and DACs is around 0.3 microseconds per link, while 10GBASE-T, with its more complex encoding scheme, is closer to 3 microseconds per link. A couple of microseconds might not seem like much, but emerging applications like high-speed trading and AI are increasingly demanding sub-microsecond latency. This makes DACs ideal for any current or future application where latency is a concern and where high port counts can add up to significant power savings. DACs are also easy to deploy: as a factory-terminated and factory-tested solution, they can simply be plugged in without the complexity of cable testing and multiple connection points.
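To put rough numbers on that comparison, here is an illustrative Python sketch using the approximate per-link figures above. The wattage values are assumptions for the sake of the example: the DAC figure is capped at 1 W (the text says well under 1 W) and the 10GBASE-T figure is the midpoint of the 1.5 to 4 W range.

```python
# Illustrative per-link comparison using the approximate figures quoted above.
# Wattage values are assumptions: DAC capped at 1 W ("well under 1 W"),
# 10GBASE-T taken as the midpoint of the 1.5-4 W range.

options = {
    "ToR + SFP+ DAC": {"latency_us": 0.3, "watts_per_port": 1.0},
    "10GBASE-T":      {"latency_us": 3.0, "watts_per_port": 2.75},
}

server_ports = 500  # hypothetical server count
for name, figures in options.items():
    total_w = figures["watts_per_port"] * server_ports
    print(f"{name}: ~{figures['latency_us']} us per link, "
          f"~{total_w:.0f} W for {server_ports} ports")
```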

As emerging technologies demand more bandwidth and lower latency, large enterprise data centers are now beginning to adopt 25 Gb/s server connection speeds. In fact, a recent five-year forecast report from Dell’Oro Group predicts that 25 Gb/s will gradually replace 10 Gb/s for server speeds over the next five years. While 25 Gb/s can be supported using transceivers and duplex fiber connectivity (i.e., 25GBASE-SR), this is the most expensive option and is really only needed for very long switch-to-server lengths, which are extremely rare in enterprise data centers. Longer lengths that warrant transceivers and fiber cabling in the enterprise are typically only found in switch-to-switch links.

SFP technology has thankfully kept up with the need. SFP28 DACs, which use the same form factor as SFP+ DACs, support 25 Gb/s server connections, and the benefits of reduced power consumption, lower latency, and lower cost hold true at these speeds. In a comparison of power consumption for 500 server connections using SFP28 DACs versus 25GBASE-SR, the total estimated wattage with SFP28s is just 25 W versus around 600 W for 25GBASE-SR.

At the higher 25 Gb/s speed, however, SFP28 passive DACs are limited to lengths of about 5 meters. While this still supports in-cabinet ToR switch-to-server deployments, data centers needing longer lengths can also look at SFP28 AOCs as an alternative to transceivers with fiber cabling. Able to support up to 100 meters and typically used for link lengths of 30 meters and below, AOCs cost less and consume less power than transceivers with structured fiber cabling, while offering smaller-diameter cabling. For more information on the differences between DACs, AOCs, and fiber cabling with transceivers, check out our previous blog.

DACs Remain a Viable Option for Years to Come

The next logical migration in enterprise server connections will be 50 Gb/s, which will likely start to take hold as fiber switch-to-switch links increase to 200 and 400 Gb/s over the next 3 to 5 years. The good news is that emerging SFP56 DACs already support these speeds to a reach of 3 meters. While longer-distance 50 Gb/s deployments will need AOCs or transceivers with fiber cabling, enterprise data centers using a ToR approach are well positioned to support increasing server speeds with SFP-based DACs for several years to come.

Just as many of today’s enterprise data centers use QSFP+ to 4 SFP+ or QSFP28 to 4 SFP28 breakout DACs or AOCs to support 4X10 or 4X25 Gb/s server connections, 50 Gb/s server connections will also be supported using breakout assemblies. For example, a 200 Gigabit QSFP56 to 4 SFP56 DAC or AOC will support 4X50 Gb/s server connections. Time will tell, but as 400 Gigabit starts to enter the enterprise data center environment, we may even see the 8-lane QSFP-DD form factor used to support 8X50 Gb/s or 4X100 Gb/s server connections, with up to about 2 meters supported by DACs and longer distances via AOCs.

With a full line of SFP+, SFP28, QSFP+, and QSFP28 DACs and AOCs, as well as multimode and singlemode fiber and all categories of copper structured cabling, Siemon can support current enterprise data center speeds for both switch-to-server and switch-to-switch links, no matter what length you need. And you can rest assured that we’ve got our eye on the enterprise data center market and will be ready to support your future 50 Gb/s server connections with a full line of SFP56 DACs, AOCs, and breakout assemblies.

  Category: Data Center