Human error has come in as the number one cause of IT service incidents in Dimension Data's 2014 Network Barometer Report.
The report found that just 16 per cent of 91,000 IT service incidents logged with Dimension Data service centres in 2013 were related to a device, while the other 84 per cent were related to non-device issues such as human error, telecom failures or environmental issues.
The largest category of IT incidents is human error: nearly one-third of all incidents (6 per cent configuration errors plus 26 per cent other human errors) are potentially avoidable.
According to the report, these statistics are worrying because a large proportion of incidents fall outside of a support provider's traditional remit, and must be resolved by organisations themselves.
First published in 2009, this year's Network Barometer Report was compiled from technology data gathered from 91,000 service incidents logged for client networks that Dimension Data supports.
Dimension Data also carried out 288 technology assessments covering 74,000 technology devices in organisations of all sizes and all industry sectors across 32 countries.
Telecom, or wide area network, failures came in as the second most frequent root cause (22 per cent).
Third on the list was physical environment problems such as loss of power, air-conditioning failures and temperature control problems.
These accounted for 15 per cent of all incidents.
In fourth spot were device-related problems, with 14 per cent of all incidents attributed to hardware.
Further, two per cent of incidents were attributed to software bugs.
Taken together, hardware failures and software bugs meant that only 16 per cent of all service incidents fell within the remit of device support contracts.
Dimension Data business development director for networking, Rich Schofield, said the latest data indicated that these service incidents were not device related and fell outside typical maintenance contracts.
"Therefore, they will need to be addressed and resolved by the organisation's internal support resources," he said.
"From a lifecycle perspective, one might expect the failure rate of obsolete devices to be higher than that of current or ageing devices. That's because obsolete devices are older and maintenance options are limited."
However, this year's analysis shows that the failure rate of obsolete devices is around 1 per cent lower than either current or ageing devices.
Schofield said the company investigated how likely obsolete devices were to fail, when compared with current or ageing devices.
"We expected to uncover that obsolete devices would cause longer downtime when they fail than current or ageing devices," he said.
"We were surprised to find that the data indicated otherwise. In fact, the average mean-time-to-repair for all devices is 3.4 hours."
Broken down by technology lifecycle stage, the data shows that current devices take about 48 minutes longer to repair than the average.
Ageing devices take the shortest time to repair - about 42 minutes shorter than average.
Obsolete devices take slightly longer to repair than ageing devices, at 3.3 hours, which is still shorter than the overall average.
Schofield said the most effective way for organisations to improve their network service levels and ensure maximum availability was to invest in mature operational systems and support processes.
"Knowing the devices and their lifecycle stages, having sparing strategies for obsolete equipment, and understanding the potential network impact if devices fail will support greater network availability."