DAC Cables Shine in the Cloud
In this edition of Voices of the Industry, Ryan Harris, Sales Engineer and Market Manager for The Siemon Company, explains why direct attach copper (DAC) cables are here to stay.
Today, many data centers are deploying higher speed server cables in the access layer of private, public and hybrid cloud locations around the globe. These server and storage connections enable a wide variety of emerging services and applications across industries. Data centers with the fastest throughput and processing times gain a digital advantage, while industry-standard network equipment and connectivity lead the way in advancing market-changing innovation. New mobile cloud applications and tools continue to emerge, keeping users conveniently connected to more digital services and unlocking best-in-class experiences wherever they are, all enabled by high speed cabling.
Some industries, such as finance, SaaS, healthcare and manufacturing, are already harnessing the advantages of the quickest response times and large processing capabilities. New 400G network switches are being deployed, linking to the network core and ultimately enabling faster end-point server speeds. There’s a lot of buzz in the industry right now about next-generation 400G and 800G port speeds being adopted by hyperscale data centers, cloud providers and some large-scale enterprises. While 400G speeds will eventually make their way into more large enterprises for uplinks between switch tiers to handle increasing amounts of data, the server connections are where bandwidth and latency need to keep up with e-commerce and emerging technologies like advanced data analytics, Machine Learning (ML), Artificial Intelligence (AI), telemedicine, online banking, high-resolution video content, and other real-time applications. Thankfully, high speed direct attach cables (DACs) are keeping up with these increasing requirements, ensuring that switch-to-server network interface connections don’t become the weakest link.
Direct attach copper cables are known to have the lowest latency of all short reach connection options. This is because optical connections introduce measurable delays as they actively convert electrical signals to optical signals and back again. Active fiber optics are not needed for high speed connections of 3 meters and below when deploying 25G and 100G server connections. Nor is there a need for Forward Error Correction (FEC), which is used in high speed connections to detect and correct errors but adds latency of its own. A DAC’s cable gauge (AWG) can help overcome the need for FEC. This matters because the public cloud is not a great fit for every organization. For example, heavily regulated financial and healthcare institutions often find it difficult to navigate compliance due to the strict security requirements that go with private client information; this alone can make a private cloud system the optimal choice.
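As a rough illustration of that selection logic, the sketch below applies the rule of thumb described above: passive DACs for 25G and 100G links of 3 meters or less, active optics beyond that. The cutoff values mirror the article; the helper function itself is hypothetical, and real designs also weigh cable routing, airflow and vendor support.

```python
# Illustrative sketch only: a toy helper applying the rule of thumb above.
# Thresholds (3 m, 25G/100G) come from the article; everything else is assumed.

def suggest_media(distance_m: float, speed_gbps: int) -> str:
    """Return a rough media suggestion for a switch-to-server link."""
    if speed_gbps in (25, 100) and distance_m <= 3.0:
        # Short reach: no electrical-to-optical conversion and no FEC needed,
        # so a passive DAC gives the lowest latency and power draw.
        return "passive DAC"
    # Longer reach: active optics (AOC or transceivers over structured cabling).
    return "AOC or optical transceiver over fiber"

if __name__ == "__main__":
    for dist, speed in [(1.0, 25), (3.0, 100), (10.0, 100)]:
        print(f"{dist} m at {speed}G -> {suggest_media(dist, speed)}")
```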
Public clouds have become popular because they are great for enabling access to near-unlimited compute power when typical application usage is unknown. However, when there is a consistent need for larger processing workloads, having a private cloud or hybrid cloud makes more dollars and cents. Large enterprise organizations with use cases for 10 to 100+ cabinet systems typically find that operating a private cloud is more economical in the long run. In today’s data centers, a system with 100G down and 400G up is ahead of the pack. 100G QSFP28 switches and server network interface connections are widely available and offer options to break out higher density 25G server and storage connections from the 100G ports. Larger enterprise data centers are already considering the need for 200G downlink port speeds, with even faster speeds expected in the future.
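The breakout arithmetic is simple, as the sketch below shows for a hypothetical 32-port QSFP28 ToR switch. The port count and the number of reserved uplinks are illustrative assumptions, not a reference design; only the 4 x 25 Gb/s lane structure of QSFP28 comes from the article.

```python
# A minimal sketch of QSFP28 breakout math, assuming a hypothetical 32-port ToR switch.

QSFP28_PORTS = 32        # assumed total 100G ports on the ToR switch
UPLINK_PORTS = 8         # assumed ports reserved for uplinks toward the core
LANES_PER_PORT = 4       # QSFP28 = 4 x 25 Gb/s NRZ lanes

server_ports_25g = (QSFP28_PORTS - UPLINK_PORTS) * LANES_PER_PORT
print(f"25G server connections via breakout DACs: {server_ports_25g}")   # 96
print(f"Aggregate downlink bandwidth: {server_ports_25g * 25} Gb/s")      # 2400 Gb/s
```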
When designing cloud systems, agility is a key consideration, and choosing the right cabling can help make your IT budget less cloudy. Many on-premises and public cloud data centers utilize Top of Rack (ToR) server pods to deploy more compute when needed. The repeatable design enables the rapid deployment of server pods and allows users to easily scale at pace. Combined with direct attach copper cables, ToR systems can also be more economical, as cloud server systems are known to be very power-hungry and create a lot of heat.
In addition to the low price of direct attach copper cables, power consumption is another key consideration when it comes to cost. ToR systems involve many short reach connections, which are ideal situations for using DACs. Passive DACs offer the lowest power consumption per port, which contributes to wider reductions in data center power usage and cooling costs. For short reach connections, direct attach copper is the popular choice; if you need longer distances, Active Optical Cables (AOCs) with their embedded transceivers can go the distance but consume more power than DACs. Structured cabling with transceivers is the most power-hungry of the three options, typically consuming upwards of 1.2 Watts per port for 25 Gig and 3.5 Watts for 100 Gig.
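A back-of-the-envelope calculation shows how those per-port figures add up across a pod. In the sketch below, the 1.2 W (25G) and 3.5 W (100G) transceiver figures come from the text above; the passive DAC and AOC wattages, the port count, electricity price and PUE are illustrative assumptions, not vendor specifications.

```python
# Rough per-port power and annual energy cost comparison (illustrative assumptions).

PORTS = 96                 # assumed number of switch-to-server connections
KWH_PRICE = 0.12           # assumed $/kWh
PUE = 1.5                  # assumed power usage effectiveness (adds cooling overhead)
HOURS_PER_YEAR = 8760

watts_per_port = {
    "passive DAC (25G, assumed)": 0.1,
    "AOC (25G, assumed)": 1.0,
    "optical transceiver (25G, per article)": 1.2,
    "optical transceiver (100G, per article)": 3.5,
}

for media, watts in watts_per_port.items():
    kwh = watts * PORTS * HOURS_PER_YEAR / 1000 * PUE
    print(f"{media:40s} ~{kwh:7.0f} kWh/yr  (~${kwh * KWH_PRICE:,.0f}/yr)")
```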
Furthermore, having short reach point-to-point cables within a cabinet makes transport connectivity troubleshooting easy. In the unlikely event that a direct attach copper cable fails, it is quick and easy to replace: you can simply swap in a known ‘good’ DAC cable to validate that cabling is not the issue. This is the typical procedure used to confirm that a third-party cable is not the cause of a connection problem, and the same procedure is used when calling the network equipment OEM for technical support. Many network engineers don’t get the support they would expect with bundled cables, which also usually carry a shorter warranty period than third-party cables. Using third-party Ethernet DAC cables also offers far more options than OEM bundled cables, for example colored cable jackets that are useful for troubleshooting and system mapping identification. Understandably, concerns about futureproofing your cloud systems can arise; these can be addressed with Top of Rack systems using high speed point-to-point cables, which will continue to support switch-to-server connections at next-generation speeds beyond 100G.
The ability to support faster transmission speeds has a lot to do with the binary encoding schemes used to convert data into digital signals. The most common encoding scheme, which has long been used in data transmission, is non-return-to-zero (NRZ). NRZ uses two different voltage levels for the binary digits, where a positive voltage represents a “1” and a negative voltage represents a “0”. This encoding has evolved significantly over the past few decades and is primarily used to support bit rates of 1, 10, and 25 Gb/s per lane in data center links.
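A minimal sketch of that scheme is shown below: one bit per symbol, mapped to one of two levels. The +1/-1 values are normalized placeholders, not actual line voltages.

```python
# Minimal NRZ illustration: one bit per symbol, two normalized levels.

def nrz_encode(bits: str) -> list[float]:
    """Map each bit to one of two symbol levels (1 -> +1, 0 -> -1)."""
    return [+1.0 if b == "1" else -1.0 for b in bits]

symbols = nrz_encode("1011001")
print(symbols)                         # [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0]
print("bits per symbol:", 1)           # NRZ carries 1 bit per symbol period
```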
PAM4 encoding delivers twice the bit rate of NRZ in the same signal period by using four voltage levels instead of two, supporting 50 and 100 Gb/s lane rates without an increase in channel loss. For small form-factor pluggable technology, PAM4 now gives us single-lane SFP56 interconnects for 50 Gig and four-lane QSFP56 interconnects for 200G, which is the next speed jump for short reach server connections. PAM4 is also what enables 400G applications: the double-density 8-lane QSFP-DD form factor relies on the same 50 Gb/s PAM4 bit rate to achieve 400 Gig (i.e., 50 Gb/s x 8 lanes), which is ideal for switch-to-switch deployments.
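The sketch below shows both ideas: two bits per symbol across four levels, so the same symbol rate carries twice the data of NRZ, and port speeds that are simply lanes times per-lane rate. The level values are normalized placeholders; the lane counts and 50 Gb/s rate are the figures cited above.

```python
# Minimal PAM4 illustration plus the lane arithmetic from the article.

PAM4_LEVELS = {"00": -3.0, "01": -1.0, "11": +1.0, "10": +3.0}  # placeholder levels

def pam4_encode(bits: str) -> list[float]:
    """Map each pair of bits to one of four symbol levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(pam4_encode("10110010"))          # 4 symbols carry 8 bits

# Port speeds as lanes x 50 Gb/s per lane:
for form_factor, lanes in [("SFP56", 1), ("QSFP56", 4), ("QSFP-DD", 8)]:
    print(f"{form_factor}: {lanes} x 50 Gb/s = {lanes * 50} Gb/s")
```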
Unfortunately, the increased throughput offered by PAM4 comes at a cost. PAM4 requires FEC, which adds latency, typically on the order of 100 to 500 nanoseconds (ns). Because of this added latency, the highest speed, lowest latency option is currently the 4-lane QSFP28 industry standard DAC, which supports 100 Gig using NRZ at a 25 Gb/s bit rate and does not require FEC up to 3 meters.
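To put those numbers in perspective, the sketch below compares a 3 meter link with and without FEC, using the 100 to 500 ns range cited above. The assumed signal velocity in copper (roughly 70% of the speed of light) is illustrative, and serializer/deserializer and switch latencies are ignored.

```python
# Rough latency comparison for a 3 m switch-to-server link (illustrative assumptions).

C = 299_792_458            # speed of light, m/s
PROPAGATION_FACTOR = 0.7   # assumed signal velocity in a copper cable
CABLE_LENGTH_M = 3.0

prop_ns = CABLE_LENGTH_M / (C * PROPAGATION_FACTOR) * 1e9
print(f"Propagation over {CABLE_LENGTH_M} m of copper: ~{prop_ns:.1f} ns")

for fec_ns in (0, 100, 500):           # 0 = NRZ QSFP28 DAC, no FEC required
    label = "NRZ DAC, no FEC" if fec_ns == 0 else f"PAM4 link, FEC ~{fec_ns} ns"
    print(f"{label:28s} total ~{prop_ns + fec_ns:.1f} ns")
```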
While most enterprise data centers are just starting to shift to 25 Gig server connections using single-lane SFP28 DACs, 4-lane QSFP28 DACs and breakout cables enable migration to high speed, low latency server connections to support emerging real-time applications. For applications such as high frequency financial trading, edge computing, interactive gaming, video conferencing, artificial intelligence, real-time monitoring and data analytics, any latency over 100 ms can dramatically impact performance. In gaming, for example, latency over 100 ms means noticeable lag for players. For data centers looking to support these applications, designing latency out of these switch-to-server connections should be a key focus, making QSFP28 DACs the perfect choice.
ToR switch-to-server deployments with DACs are here to stay and will get you where you need to go. When planning your next server and storage deployment, remember that not all cables are created equal; it is important that the cables selected are electrically tested above industry standard to ensure a worry-free high speed connection. Utilizing cabling that aligns with industry standards ensures interoperability, meaning the solution you choose should work with any vendor’s switch within the same ecosystem, such as Ethernet or InfiniBand. The key takeaway is that the introduction of PAM4 encoding enables DACs and AOCs to support 10 to 400 Gig direct links, including the lowest-latency 100 Gig option of QSFP28 DACs using NRZ encoding for emerging real-time applications, helping to keep your IT spending less cloudy.
Ryan Harris is Sales Engineer and Market Manager for Siemon. The Siemon Company delivers high performance IT infrastructure to data centers.