The industry is now in the process of upgrading its networks to 40 Gigabit Ethernet (40 GbE). So far, early adoption has centered on cloud computing and large service provider, enterprise, education, and government data centers, but broader deployment is certain to follow. As with every previous Ethernet standard, the upgrade is driven by the development of IT technology: outside of the smallest data centers, continued growth in data centers and network backbones will eventually require 40 GbE support.

Rising server CPU speeds, coupled with the emergence of multi-core processors, continue to increase the computing power of each server. In addition, virtualization puts more of that capacity to productive use. Together, these factors increase the network traffic each server generates. As a result, newly installed servers are increasingly configured with 10 GbE interfaces instead of the 1 GbE interfaces still common on servers today.

40 GbE server and workstation interfaces already exist, but they are not yet widely used; current deployments are limited to research institutions and other organizations that work with very large data sets.

Side effects of traffic growth and application architecture

Increasing server traffic has side effects across the entire network. Installing servers with 10 GbE interfaces means the interfaces of the access-layer switches must also be upgraded to 10 GbE. The uplinks connecting the aggregation layer to the core network must then be upgraded to 40 GbE, and some may even need to be upgraded to 100 GbE.
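The upgrade pressure described above can be sketched as a simple sizing exercise. The sketch below estimates how many 40 GbE uplinks an access switch needs once its servers move to 10 GbE; the rack density and oversubscription targets are illustrative assumptions, not figures from the article.

```python
import math

# Hypothetical sizing sketch: uplink capacity for one access switch.
# All figures are illustrative assumptions, not vendor data.
SERVERS_PER_RACK = 40     # assumed rack density
SERVER_NIC_GBPS = 10      # servers upgraded from 1 GbE to 10 GbE
UPLINK_GBPS = 40          # candidate 40 GbE uplinks

# Total server-facing (downlink) capacity on the switch.
downlink_total = SERVERS_PER_RACK * SERVER_NIC_GBPS   # 400 Gbps

def uplinks_needed(oversubscription_ratio: float) -> int:
    """Number of 40 GbE uplinks to hit a target oversubscription ratio."""
    required_gbps = downlink_total / oversubscription_ratio
    return math.ceil(required_gbps / UPLINK_GBPS)

print(uplinks_needed(4.0))   # 4:1 oversubscription -> 3 uplinks
print(uplinks_needed(2.0))   # 2:1 oversubscription -> 5 uplinks
```

Even at a lenient 4:1 oversubscription ratio, a single 10 GbE uplink is nowhere near enough, which is why access-layer upgrades cascade upward into 40 GbE aggregation links.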

In addition, changes in application architecture and virtualization require more network capacity between racks in the data center. In the past, a server typically ran only one application. In that model, traffic between the end user and the application flowed across the network backbone, into the data center core, and on to the server hosting the application; traffic demand between racks was modest.

Applications now typically consist of multiple components, each of which can run on any server in the data center. Traffic between components therefore flows between racks, which increases bandwidth demand inside the data center. Once the network is upgraded to carry this type of traffic, 40 GbE links are required to support current and future needs.

Virtualization and big data increase the pressure on network traffic

At the same time, virtualization creates additional load. A virtual machine (VM) can be migrated quickly from an overloaded server to a more lightly loaded one. The target server may sit in a different rack, so moving the VM requires transmitting its entire image through the aggregation layer, or even across the core network. Application components that shared a rack before a migration may end up spread across different racks afterward, turning intra-rack traffic into inter-rack traffic.
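A back-of-the-envelope calculation shows why migration traffic favors faster links. The image size and achievable link utilization below are assumptions chosen for illustration only.

```python
# Rough VM migration transfer time at different link speeds.
# Image size and utilization are illustrative assumptions.
VM_IMAGE_GB = 32          # assumed memory + state image to transfer
LINK_UTILIZATION = 0.7    # assume ~70% of line rate is achievable

def transfer_seconds(link_gbps: float) -> float:
    """Seconds to move the VM image over a link of the given speed."""
    bits = VM_IMAGE_GB * 8 * 1e9                      # image size in bits
    return bits / (link_gbps * 1e9 * LINK_UTILIZATION)

for gbps in (1, 10, 40):
    print(f"{gbps:>2} GbE: {transfer_seconds(gbps):6.1f} s")
```

Under these assumptions, the same migration that ties up a 1 GbE path for minutes completes in seconds at 40 GbE, which matters when VMs are rebalanced frequently.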

Big data adds further pressure. Technologies such as Hadoop, for example, split the processing load across multiple virtual machines, each holding a portion of the data. After each processing cycle completes, the resulting data is transmitted across the network to other nodes for further processing and consolidation. The sheer volume of data, the load it places on the network, and the repeated movement of virtual machines all call for higher-capacity links, and that in turn means upgrading the network to 40 GbE.
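To make the shuffle-traffic argument concrete, the sketch below estimates how long one cross-rack data exchange takes at different aggregate link capacities. The node counts, per-node data sizes, and cross-rack fraction are hypothetical numbers, not measurements.

```python
# Illustrative estimate of cross-rack shuffle traffic per job cycle.
# Node counts and data sizes are assumed for the sake of the example.
NODES = 100
DATA_PER_NODE_GB = 50        # intermediate results held on each node
CROSS_RACK_FRACTION = 0.8    # assumed share of traffic leaving the rack

# Total data crossing the aggregation layer each cycle.
shuffle_gb = NODES * DATA_PER_NODE_GB * CROSS_RACK_FRACTION  # 4000 GB

def cycle_minutes(aggregate_gbps: float) -> float:
    """Minutes to move one cycle's shuffle data at a given capacity."""
    return shuffle_gb * 8 / aggregate_gbps / 60

print(f"{cycle_minutes(40):.1f} min at one 40 GbE link")
print(f"{cycle_minutes(160):.1f} min at four 40 GbE links")
```

Because the shuffle repeats every cycle, cutting its duration by adding 40 GbE capacity shortens the whole job proportionally.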

If more evidence is needed, consider the myriad smartphones and tablets connected via 802.11n, 802.11ac, or 4G networks. These devices generate enormous volumes of data, all of which must reach the data center through the backbone. As a result, large campus, enterprise, and government networks are beginning to upgrade to 40 GbE or even 100 GbE.

Converged networks also increase load

In the past, network and storage data were carried on separate networks. Fibre Channel over Ethernet (FCoE) eliminates the cost of maintaining two networks, along with the separate cabling and switches each requires, by allowing storage traffic to share the Ethernet network. Simply combining traffic that previously ran on separate networks demands a higher-capacity network, and storage traffic imposes stricter requirements besides.

Storage traffic requires a lossless network, whereas Ethernet uses best-effort packet delivery. Newly developed IEEE standards for congestion control, bandwidth management, and traffic-class prioritization allocate dedicated network capacity to storage data to ensure that none is lost. Reserving bandwidth for storage reduces what remains for other traffic, which further increases the overall capacity requirement.

The upgrade process is still in its early stages

Despite growing bandwidth requirements, the 40 GbE upgrade schedule faces a practical problem: existing cabling must be replaced to support the higher speeds. Rather than ripping out cabling wholesale, organizations should proceed cautiously, adding connections as traffic growth demands, even as they begin laying the groundwork for an eventual 100 GbE deployment.

Meanwhile, 40 GbE deployment will continue. Just as 10 Mbps Ethernet once met requirements and the 1 GbE connections to rack servers were later replaced with 10 GbE, a 40 GbE upgrade is sure to follow.
