The University of Auckland is installing Juniper QFabric network switching fabric to power communications to and from the new datacentre at its Tamaki Innovation Campus in Glen Innes. It will allow university staff and other users access to the full power of the university's high-performance computing cluster, part of the nationwide New Zealand eScience Infrastructure (NeSI).
At the same time, the university will install another QFabric system at its central city campus, for improved throughput, latency and scalability on that site too. The two centres will operate seamlessly together to provide a high-performance computing service.
The new Tamaki datacentre will take over from a second datacentre that currently runs as backup to the city campus site. At present both datacentres are in the city, within about 300 metres of each other, says James Harper, associate director of operations. This is clearly not ideal from a disaster-resilience point of view, he says.
The Tamaki campus is about 8 km from the city and on a separate power grid.
The University of Auckland has a high-performance and heavily virtualised datacentre environment with approximately 2100 virtual machines that, together with the new NeSI supercomputer, demands dense 10 Gigabit Ethernet top-of-rack connectivity today, with a pathway to 100 Gigabit Ethernet in a few years.
"The Juniper fabric will be the only network infrastructure we have in the Tamaki datacentre," Harper says. "We have a fibre connection among our campuses. The new datacentre will be connecting to the other campuses via fibre too. It goes out to the internet and to our customers who are not based on the campuses."
The infrastructure being put into the new datacentre is a very different design from current network infrastructure, to cope with the much higher volume of work, Harper says.
"Because the university is funded by the taxpayer and grants, openness in tendering is very important," he says. "We had eight respondents and finally narrowed it down to a shortlist of two vendors" -- the other being Cisco. "One of the two factors that swung our decision towards Juniper was the single point of management; the QFabric is not a conglomeration of switches but one switch distributed among the datacentres, so there's one management point. There are no problems with dependencies or version incompatibilities; there's only one infrastructure for our staff to have to work with."
The other factor was "the sheer performance" of the Juniper system. "When you look at the throughput, scalability and latency, it was really in a different ballpark to anything else we were looking at.
"As part of the RFP [request for proposal] and evaluation, as well as the usual suspects such as myself and the [network] architect, we had on the evaluation group the people that would have to be working with [the equipment] on a day-to-day basis [for example] the leads of our network engineering team.
"We also had customer representation from the high-performance research community. Among the high-performance guys I think there would have been a minor revolution if we hadn't chosen QFabric. They took one look at the specifications and decided they couldn't live without it. They were thrilled with the idea of its ease of management.
"So it wasn't a decision made solely on price or RFP response; the customers who would be using it and the people who were going to be working on it are really the ones that drove the decision.
"They were obviously impressed with the throughput, scalability and latency of it, but the thing that really swung it for them was the predictability.
"We're a highly virtualised environment. Depending on where your virtual machine sits and what it's talking to, there can be a number of different paths from the source to the consumer of the information and each of those paths has a different performance characteristic. With QFabric, regardless of where in the network they were connected and what else they were talking to, the performance was going to be the same."
Installation was underway when CIO spoke with Harper in June. "The racks have arrived and we're putting in cabling now," he says. "We expect to start scenario testing on July 17 and we'll be putting the first production systems in on August 13.
"We'll be putting QFabric in and populating each of the racks in turn. We expect a staged uptake over two to three months."
The Juniper switching fabric will be used entirely for IP networking of servers, with storage area networks run separately, Harper says.
"We could see that QFabric was heading in the direction that if we wanted to use it in converged IP and storage we'd be able to," he says. "But we don't see ourselves doing that until we've deployed 40 Gbit connections to most servers and the 40 Gbit optics and network cards become price competitive. So it's nice to know it'll be there in the future, but we're buying products that have delivered that functionality now. For the foreseeable future it's going to be IP traffic only [on the QFabric]."