I'm looking at the architecture of motherboards. Looking at standard workstation motherboards, I've found that the older ones use a DMI (10 Gbps) connection between the southbridge and northbridge chips, while the newer ones use DMI 2.0 (20 Gbps) for the connection between the two bridges. Both DMI and DMI 2.0 use a 4-lane link.
So I was wondering: do server motherboards use the same DMI/DMI 2.0 connection between these two bridges? And if so, are there only 4 lanes like on standard workstation motherboards, or do they increase the number of lanes or run them in parallel to increase the bit rate? If they don't use the DMI/DMI 2.0 connection, what connection do they use?
Some chip manufacturers integrate the north and south bridges into a single chip; in this case, the motherboard will have just one big integrated circuit.
Or, depending on the CPU architecture, it may require only the south bridge chip.
Current Intel CPUs have an integrated memory controller and an integrated PCI Express controller, meaning that these CPUs have an integrated north bridge chip; therefore, they don’t require this chip on the motherboard.
Chipset manufacturers now use a dedicated high-speed connection between the north and south bridges, with PCI devices connected to the south bridge. Standard PCI slots, if available, are connected to the south bridge. PCI Express lanes can be available on both the north bridge chip and the south bridge chip. Usually, the PCI Express lanes available on the north bridge chip are used for video cards, while the lanes available on the south bridge chip are used to connect slower slots and on-board devices, such as additional USB, SATA, and network controllers.
Currently, Intel uses a dedicated connection called DMI (Direct Media Interface), which uses a concept similar to PCI Express, with lanes using serial communications, and separate channels for data transmission and reception (i.e., full-duplex communication). The first version of DMI uses four lanes and is able to achieve a data transfer rate of 1 GB/s per direction (2.5 Gbps per lane), while the second version of DMI doubles this number to 2 GB/s. Some mobile chipsets use two lanes instead of four, halving the available bandwidth.
AMD uses a dedicated datapath called “A-Link,” which is a PCI Express connection with a different name. “A-Link” and “A-Link II” use four PCI Express 1.1 lanes and, therefore, achieve a 1 GB/s bandwidth. The “A-Link III” connection uses four PCI Express 2.0 lanes, achieving a 2 GB/s bandwidth.
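The bandwidth figures above all come from the same arithmetic: lanes × per-lane transfer rate × encoding efficiency. Here is a minimal sketch in Python, assuming the 8b/10b encoding used by PCIe 1.1/2.0-era links (only 8 of every 10 transmitted bits are payload); the function name is just for illustration:

```python
# Rough bandwidth arithmetic for DMI / A-Link style links.
# Assumption: 8b/10b line encoding, as in PCIe 1.1/2.0.

def link_bandwidth_gbs(lanes, gt_per_s, encoding=8 / 10):
    """Usable bandwidth in GB/s per direction."""
    usable_gbps = lanes * gt_per_s * encoding   # payload bits per second
    return usable_gbps / 8                      # bits -> bytes

print(link_bandwidth_gbs(4, 2.5))  # DMI / A-Link II: 1.0 GB/s
print(link_bandwidth_gbs(4, 5.0))  # DMI 2.0 / A-Link III: 2.0 GB/s
print(link_bandwidth_gbs(2, 2.5))  # two-lane mobile DMI: 0.5 GB/s
```

This reproduces the 1 GB/s, 2 GB/s, and half-bandwidth mobile figures quoted above.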
I appreciate the reply, but it doesn't answer my question at all. Everything you've copied and pasted is summed up in my first paragraph, before I've actually asked the question.
"...south bridge chip are used to connect slower slots and on-board devices, such as additional USB, SATA, and network controllers."
Servers will have SATA connections and network connections; these are slower than the DMI or A-Link connection to the north bridge / CPU, so the speed of the link is not a problem, and they will use 4-lane DMI or A-Link III.
A server only needs 4 HDDs connected for the DMI 2.0 (20 Gbps) link to be maxed out, which isn't a lot really. This doesn't take into account any of the other bandwidth required for the system to run. Surely a server should be able to hold this many hard drives without an issue.
Think about the read/write speed of the drives; it is again much slower, so data can be written to several drives at once without getting anywhere near the bus speed.
Yeah, SATA III is rated at 6 Gbps. 4 HDDs × 6 Gbps = 24 Gbps, which is 4 Gbps more than the DMI 2.0 (20 Gbps) bus can handle.
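On raw line rates, the oversubscription arithmetic works out as claimed; note, though, that both figures include 8b/10b encoding overhead, and mechanical drives rarely sustain the full interface rate. A quick sketch:

```python
# Do four SATA III ports oversubscribe a DMI 2.0 link on raw line rate?
SATA3_GBPS = 6    # SATA III line rate per port
DMI2_GBPS = 20    # DMI 2.0 aggregate (4 lanes x 5 Gbps)

drives = 4
aggregate = drives * SATA3_GBPS
print(aggregate, aggregate > DMI2_GBPS)  # prints: 24 True
```

So on paper four fully saturated SATA III ports exceed the link, even before any other traffic is counted.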
As you should be able to tell by now, I'm after facts, not opinions, with this question. I.e., are server motherboards like normal workstation motherboards, designed with the same DMI/A-Link connection between the chipset components? Yes/No. And if no, what connection do they have instead?
Have a read of this PDF; chapter 2 may help answer your question.