Leading technology -- as a creator, manager, implementer, and business catalyst -- is no small feat even in the course of running IT or a business. Technology changes rapidly, and it often becomes increasingly complex. The problems and opportunities to which it is applied are equally variable, messy, and involved; the easy "just add automation" problems have already been addressed.
Technology leadership in its four key forms is at the heart of InfoWorld.com's mission, and the InfoWorld Technology Leadership Awards honor those who have been exceptional technology leaders over the past two years. No "we did it in six weeks" here -- true technology leadership spans constituencies and technologies, and it's often exemplified by projects months in the making.
The TLAs have a broad mission to recognize two key shifts in IT.
First, deployment is no longer the main game for IT, even if it remains where the bulk of effort is spent. Instead, creating value through technology -- within IT, of course, but also by helping the business grow -- is where leadership matters. As technology increasingly permeates the business, IT is providing more businesswide inspiration. And not just the CIO or CTO; IT project managers, admins, architects, and the like are equally capable of contributing, so the TLAs now honor leaders regardless of title.
Second, technology is no longer the sole province of IT. Nearly every businessperson today has been using technology at work and at home for two decades, and most are more than passably familiar with a variety of computer technologies. Thus, limiting technology to the high priests of IT is untenable. But so is the notion that the business is simply a customer of IT; that too suggests a "father knows best" mentality. It's no accident that, over the past two decades, the main technology drivers of business change were pushed not by IT but by businesspeople: the PC, the Internet, cloud computing, mobile computing, and increasingly social technology. Thus, the TLAs look for technology leadership anywhere in the business, not just within IT.
The 2012 TLAs showcase such leadership across the business and IT, as well as across roles. IT professionals remain the heart of technology leadership -- no surprise to us, given the passion and creativity many technologists bring to the table. Our winners, selected by a panel of InfoWorld editors from nearly 120 nominees, fall into four categories of leadership:
-- Business management, which honors technologists who assert leadership in the business itself. This leadership involves technology, but it's less about the technology itself and more about driving business growth or innovation. The fact that the person is in IT is irrelevant; as with sales, marketing, finance, manufacturing, and so on, IT employees are first and foremost employees, and these technology leaders act accordingly.
-- IT management, which honors technologists who assert leadership in the realm of IT, typically around management and enablement of IT as a whole.
-- Technology creation/enhancement, which honors the creative side of technologists. Here, leadership is about vision and execution, setting a new course for technology, and coming up with novel approaches to make it happen. We don't honor vendors' creation of innovative products here (that's what our Technology of the Year Awards are for), though we do honor internal products created as a by-product of IT innovation, as well as broad technology innovation at vendors.
-- Technology deployment, which honors the most exceptional leadership in the types of challenges IT faces day in and day out (it's no surprise this category had the greatest number of nominations): designing, deploying, and maintaining the technology systems that the business depends on to succeed.
The TLAs have no set number of winners, nor need there be honorees in each category. We're looking for the best, period. (For details on the criteria and how to enter for 2013, go to the InfoWorld Technology Leadership Awards page.)
We've found it, as the 2012 Technology Leadership Awards winners show. We present them in alphabetical order within each category:
-- Jeff Perry, University of Kansas
-- Attila Bognar, U.S. Army Human Resources Command
-- Laynglyn Capers, UPS
-- J. Wolfgang Goerlich, Munder Capital Management
-- Baraa Khamis, Abu Dhabi Executive Affairs Authority
-- Mihir Shah, Fidelity Asset Management
-- Nick Ganju, ZocDoc
-- Lincoln Wallen, DreamWorks Animation
-- Steve Hamby, Orbis Technologies
-- Kate Miller, Internal Revenue Service
-- John Schanz, Comcast Cable
Jeff Perry, deputy technology officer, University of Kansas
The University of Kansas faced a challenge familiar to many organizations: an explosion of stored data. But at the university, that exponential growth was happening independently in dozens of separate facilities managed by separate departments in the tradition of academic fiefdoms. The result was wasteful spending on both the storage itself and support costs like maintenance and cooling, an inability to integrate data where useful, and a failure to handle compliance or security requirements in a consistent way. Worse, faculty and staff members are mobile workers, doing their jobs at multiple locations on, off, and between campuses, yet the decentralized storage made it difficult for them to reach their data outside the office.
The answer was to replace that morass of storage islands with a subscription-based, multitiered storage solution. Then the program director for enterprise infrastructure and operations, Perry was charged with not only creating the service, called CFS (Central File Storage), but selling it to those fiefdoms. "Centralization of anything at a university can cause a sense of fear of loss of control caused by a lack of trust," notes CIO Bob Lim.
In other words, it was a business management challenge at least as much as it was a technology one.
Perry stepped out into the various departments and personally secured buy-in from multiple stakeholders, and he spoke at town hall meetings to introduce the idea and gather feedback. "It may be cheaper to buy a hard drive at Best Buy, but that doesn't provide you with backup, oversight, or data security. Jeff did a great job explaining the actual short-term and long-term costs of decentralized data storage and the significant value of the CFS solution," says information resource specialist Brett Gerstenberger.
Perry also marketed the idea that the Central File Storage service required no change in behavior for users. "We focused our efforts on fixing the economic problem, fixing the access and data security problems, and fixing the other problems without forcing customers to change processes or learn new technologies. This makes it so teachers can teach, researchers can research, and data managers can manage data," Perry says.
The result: CFS was up and running in less than a year, despite the cultural change required to get it first accepted, then actually used.
CFS has been fully operational for nine months, and the majority of the campus now uses it, storing more than 23TB of data. Gerstenberger notes, "Since implementation, I've heard from users who are so relieved to not have to worry about backups and day-to-day maintenance. For many departments, the cost savings have already been huge." And users can now share information, regardless of their department, which has greatly increased opportunities for collaborative work and provided additional mobility, he says.
Attila Bognar, chief of the project management division, U.S. Army Human Resources Command
It's the kind of project that could destroy many IT careers. As part of the U.S. Army's 2011 base closure process (identified by the separate Base Realignment and Closure Commission), Attila Bognar had to consolidate and integrate three HR commands and their data centers from three separate bases to one new location at Fort Knox, Ky. The HR IT portfolio is one of the largest such portfolios in the Army, with 300 systems and 900 interfaces spanning all parts of the government and serving 2.5 million soldiers worldwide.
But consolidating three data centers into one at a new location was just part of the challenge. The total HR IT workforce headcount had to be reduced by 30 percent, and the reduced staff at the new location had to be hired essentially from scratch. The Army would move just 10 percent of the workforce to Fort Knox; everyone else would be a new hire. Plus, the existing systems had to be migrated into a new infrastructure managed by an outside contractor with rules and policies that were unlike those at the three original sites.
And, oh yeah, despite the changeover of pretty much everything, it all had to keep working.
The rationale was clear: The consolidation and changeover would save the Army a lot of money and render the HR systems more capable, efficient, consistent, and flexible. As the chief of the project management division for the Army's HR command, Bognar had to make it happen.
To do so, he incorporated agile techniques to meet quick-turn system sustainment and maintenance tasks during migration. He did this despite having inexperienced personnel at the outset due to the relocation of the command to an area where IT candidates were in short supply. Bognar also reorganized the HR IT organization into a product line rather than a matrixed organizational structure, allowing for more personal contact with customers and consistency of support while maintaining the flexibility necessary to maximize a small labor pool. Those changes in project and organization management are what the Army credits for Bognar's success in making the transition work.
The Army is now replicating what Bognar did in other data center consolidation projects.
Laynglyn Capers, vice president for information services, UPS
The logistics industry has traditionally focused on the needs of the shipper; the recipients of all those packages have been given little more than better visibility into their packages' progress, with no way to manage their end of the delivery. UPS saw the opportunity and created tools to help residential receivers control and manage their incoming package deliveries.
But making it happen was no mean feat, discovered Laynglyn Capers, the vice president for information services charged with the receiver-facing software effort. Remember, most of UPS's customer-facing systems were designed for shippers, so connecting software designed for use by receivers required significant modifications to many existing IT systems, involving both IT and business stakeholders. In all, more than 60 applications were either modified or built to support the new tools.
To reach this goal, Capers established the program management office, and he oversaw both the development project and the interactions with the other UPS business units affected. It's what IT managers are supposed to do, of course, but it's not easy to do well -- especially with a new kind of software product and all the market and internal unknowns created.
The result was UPS My Choice, a paid subscription service that allows Web and mobile access for users to see their incoming UPS home deliveries at any time, choose delivery preferences, reroute shipments, adjust delivery locations and dates as needed, and add special delivery instructions visible only to UPS. As of April 2012, UPS My Choice has well over 1 million subscribers.
J. Wolfgang Goerlich, information systems and security manager, Munder Capital Management
Financial services provider Munder Capital Management had decided on an aggressive growth strategy requiring a world-class IT infrastructure that had to respond quickly and provide a platform for growth. But J. Wolfgang Goerlich, information systems and security manager, faced a problem in executing that business strategy: The existing systems -- the applications, servers, storage, network, and the data center itself -- were old and unable to scale. Worse, they were consuming so many maintenance resources that his team couldn't spend the time needed to develop forward-looking applications and services.
The infrastructure that absolutely had to be world-class was, in fact, in a precarious decline.
Goerlich responded with two initiatives:
-- Under a "one team, one system" plan, he restructured the software development and network operations efforts. Often called devops, this method increased the team's productivity by building cooperation, coordination, and alignment.
-- The "not a cloud in the sky" initiative was meant to revitalize the applications, servers, and storage. Using the principles created by infrastructure-as-a-service providers, he restructured his IT services to gain the best of both on-premises and cloud computing, reducing the number of servers from 137 to 43 and the number of custom applications from 317 to 272.
As a result of these two initiatives, the six-person IT team then outsourced the physical aspects of IT and built the needed modern infrastructure -- without raising the organization's budget for technology. Yet the team has 20 percent of its time allocated for improving the team and building new competencies, to keep it forward-thinking.
The IT team's metrics underscore the result of the management change: Pushes to production were increased from monthly to daily (42 on an average month), yet failed changes decreased from 17 percent to 4 percent, application feature deployment periods decreased from 45 days to five days, and server deployment periods decreased from eight hours to 30 minutes.
Baraa Khamis, IT and security manager, Abu Dhabi Executive Affairs Authority
The Executive Affairs Authority of Abu Dhabi formed in 2007 as an independent government body to formulate, incubate, and implement strategic policies under the crown prince. The EAA started off owning all its technology assets, which proved to be difficult to manage as the authority grew and took on more initiatives. A year ago, the EAA outsourced its entire IT operations as a managed IT service. Internally, IT became focused on technology strategy and security.
As manager of that strategic IT group, Baraa Khamis decided to remake the technology platform into a forward-looking one rather than perpetuate the traditional IT environment for users. Although the environment is still managed by the outsourcer, the direction comes from Khamis' group. A couple of initiatives were common to many leading organizations: migrating more than 40 physical servers to a virtual infrastructure and creating a remote disaster-recovery site.
In a post-PC world where users are no longer anchored to a specific PC at a specific location, Khamis decided that the design of the technology environment had to be very different than what companies have been building, whether internally or through outsourcers. The goal: "Users would be able to leverage the flexibility and mobility that the infrastructure provides and access information wherever they are and even take their extensions with them while they are in a business trip."
That led to a couple of novel changes in the EAA's technology approach: It decommissioned all laptops and desktop PCs in the organization and replaced them with tablets running virtual desktops, and it eliminated the use of passwords in favor of a unique, easy-to-use two-factor authentication approach. To do so, Khamis developed a security strategy in which information is saved in the EAA's private cloud with secured access, rather than worrying about securing the user devices.
Khamis notes, "Our model is all about providing information anywhere, any time, and using any device by using desktop virtualization and application streaming, so information never gets saved on the user device," eliminating all the idiosyncrasies and complexities of protecting those various devices. For users with especially sensitive information, the EAA also provides isolated environments (red/blue networks) that keep information safe and don't allow it to transit to less trusted networks.
Mihir Shah, CTO, Fidelity Asset Management
Fidelity Investment Management Technology (FIMT) is a team of 1,500 professionals providing technology services and solutions for multiple divisions within Fidelity's Asset Management Business line.
Hired into the new CTO position at Fidelity Asset Management, Mihir Shah found every Asset Management division had technology teams working in silos with minimal collaboration. Every technology team was developing applications and data sources from scratch, and there were no common standards or reuse across the divisions. That siloed structure had resulted in a complex technology environment and lack of business synergies. For example, the multiple data repositories for reference and market data prevented a holistic view of the investment management business process, applications duplicated functions across all divisions, and a jumble of platforms meant huge maintenance and integration costs for no benefit while slowing the division's ability to take advantage of new business opportunities.
Shah's strong conviction was that the business-agnostic solutions should be built as common services used by all the business initiatives across the organization. In the new CTO role, he drove the development of a Fidelity Asset Management PaaS (platform as a service) to provide common development tools, infrastructure components, and reusable frameworks for all application development efforts.
The real work was centered not around technology, but instead significant organization change management, a shift supported by the CIO that created a cross-business-unit shared architecture and technology services organization. Shah created the blueprint for this shared function that laid out the architectural principles, the technology road map, and the business-enabling services that PaaS would offer. He then conducted road shows across the business and IT organizations to evangelize the concept and earn buy-in, and he continued to promote its benefits through incremental delivery of its value, so people could see results even as the transformation was in progress.
Now, Fidelity Asset Management has increased both operational efficiency and business flexibility.
Nick Ganju, CTO, ZocDoc
As the United States continues to wrestle with the fundamental approach to its health care system, a series of laws is already forcing providers to use electronic medical records and to cover 32 million new patients in a strained, entrenched system. Worse, a severe doctor shortage is on the horizon.
For Nick Ganju, CTO at ZocDoc, that scary scenario was the impetus to help empower patients by increasing access to information and removing marketplace inefficiencies, leveling the playing field to work more in patients' favor.
Typically, U.S. patients wait an average of 20 days from the time they book an appointment to the time they see a doctor. Much of this delay can be attributed to an archaic appointment system that doesn't provide an accurate or transparent look at a doctor's current schedule. Plus, about 15 percent of doctors' appointments are canceled at the last minute, and most canceled appointments go unused -- causing doctors to lose revenue and keeping patients who would otherwise have grabbed the opening waiting weeks for their turn in the examining room.
Ganju took on a daunting project: Aggregating doctor data (including real-time availability) into a single location that would allow patients to quickly search for a local doctor who fits their needs and instantly book an appointment online. Accomplishing that task meant ZocDoc had to integrate thousands of medical practices -- all of which use a variety of disparate (often antiquated) practice management systems -- to allow patients to search and access these doctors' schedules. In many cases, to make the integration work, the team had to reverse-engineer binary storage formats from archaic software. Plus, it had to create algorithms that could generate results for complex, multivariable queries at a high enough speed to support millions of simultaneous user hits.
As a result, ZocDoc can facilitate thousands of exchanges per minute among its 1 million registered patients and the independent doctors and practices on the ZocDoc roster, with 7 million appointments managed at any given time.
Lincoln Wallen, head of R&D, DreamWorks Animation
A well-known axiom of the animated film world is that in each successive animated feature, the total number of rendering hours grows, regardless of the fact that processing power increases substantially with each hardware generation. Animated moviemakers quickly take advantage of that extra processing power to build more elaborate character interactions, richer textures, better special effects, more realistic lighting, and other creative improvements into each frame.
DreamWorks Animation has its own perspective on this phenomenon. According to "Shrek's Law," for every "Shrek" film made, the amount of compute power required to render it has doubled. Moore's Law, originally forecast by Intel co-founder Gordon Moore in 1965, reliably predicts that the transistor density of processors doubles approximately every two years. But for some time now, Moore's Law has resulted not in higher clock speeds but in more cores on each chip. Making software run faster on these so-called multicore chips requires re-engineering the code itself -- a daunting task.
And that daunting task was one Lincoln Wallen decided to attack head-on while head of R&D at DreamWorks Animation. Wallen started a program called the NextGen Project to re-engineer key parts of the studio's proprietary software toolset, in partnership with Intel, to take advantage of the multicore capabilities in the latest processors. The engineers at DreamWorks invented an architecture that allows processes to scale to exploit the cores on a single machine and to scale processes from that machine out to the data center, pulling in many more cores. This new software architecture uses distributed scheduling at the platform and data center level to intelligently take advantage of available resources in the infrastructure.
When applied to animation, DreamWorks says the results have been nothing short of revolutionary, allowing animators to work with full-resolution, fully deforming characters in real time. Applied to lighting and rendering -- the process of painting the final frames of the movie and the most computationally complex process at DreamWorks -- it led to innovation in data orchestration and management, new interface techniques for controlling and directing the large amount of processing required to generate images, and faster render times.
For DreamWorks Animation, these developments mean huge improvements in quality and efficiency in forthcoming films like "Madagascar 3: Europe's Most Wanted." Plus, the use of NextGen has helped lower the cost per film by $6 million out of the usual $130 million to $150 million budget.
The design principles led by Wallen have implications far beyond the animation industry. They could enable significant efficiency gains in the fields of digital design and digital manufacturing.
Steve Hamby, CTO, Orbis Technologies
Some parts of the federal government have amazingly advanced technology. Many parts don't. As a federal contractor, Orbis Technologies CTO Steve Hamby led a project to modernize how a Defense Dept. system collects, processes, and disseminates tactical intelligence. The infrastructure is dated and expensive to maintain, requiring more than $1 billion each year to run. The operating procedures are written in notebooks or typed in documents on individuals' workstations -- an essentially undocumented process. The lack of integration among the various subsystems means that operators rely on Microsoft Excel and instant messaging to share information, tools that are not well suited for the task, much less formally part of the process.
Then there was the political wrinkle: The Fortune 500 system integrators that provide the subsystems prefer them to be "black boxes" isolated from each other, as that lets them command more money to support the program. The integrators were none too happy to see their black boxes opened up and the processes figured out and rationalized -- especially by a small technology services contractor.
With persistence and ongoing engagement with the Defense Dept. personnel who operated the system, Hamby's team developed a technology management plan to mature the existing "buzzword compliant" service-oriented architecture, implement enterprise technology management software, and reduce the integration and deployment costs. In the first year, the plan achieved more than $100 million in hardware, software, and integration cost savings, with another $150 million in savings expected over the next four years.
Some of that savings comes from an IT modernization and process management effort. For example, Hamby's team migrated the system from an expensive Solaris Sparc environment to a Windows and Linux private cloud environment using commodity hardware. The rationale for the Sparc environment was to ensure redundancy in each hardware component, and Hamby's team was able to achieve the same redundancy through a cloud architecture that spreads redundancy across hardware components rather than embed redundancy in each component -- at a fraction of the hardware and integration costs.
The adoption of a business process management tool, preceded by deriving the actual processes to be automated, lets the operators use an industry-standard notation to measure their processes' effectiveness, efficiencies, and inefficiencies -- and even improve those processes without program office involvement.
Kate Miller, associate CIO for applications development, Internal Revenue Service
The Internal Revenue Service collects more than $2.7 trillion in annual revenue and processes more than 230 million tax returns. To do this, the agency relies on information technology systems that have grown more complex over the years. Legacy core tax processing systems developed in the 1960s, transaction volumes in the billions each year, and an increasingly large, complicated, and ever-changing tax code exemplify the intricacy of the IRS IT environment.
The legacy drag was stupefying: Tax data was stored in flat, sequential files rather than in a database. Assembly language was the predominant language for Individual Master File (the master database) processing and financial systems. And the systems ran in weekly batches. These conditions had been in place for more than 50 years, along with obsolete practices not found or expected in current IT environments.
Modernization of the IRS's IT systems began in 1998, but progress was slow, and full-scale modernization had still not been achieved a decade later. So in 2008, IRS Commissioner Douglas Shulman set a new direction and a faster pace for technology changes, launching a program that would cut across organizational boundaries and change traditional operating models. Named the CADE 2 (Customer Account Data Engine 2) Program, it targeted the 2012 filing season for delivery.
Previous efforts to modernize IRS systems produced only incremental progress, so the idea that change was possible met strong resistance. Dramatic changes in human resources and organizational processes became critical to the program's success.
The technology challenges were nothing to sneeze at either: Building a new database was a key challenge -- not in terms of size alone, as larger databases do exist, but in the combination of the scale needed to accommodate 140 million individual taxpayers each year (a number that inevitably shifts) and the complexity of the changing tax code. The variations affected more than 70 systems, roughly one-third of all tax processing applications.
Under extremely short timeframes, Kate Miller, associate CIO for applications development, led a 2,400-person organization that built the new database, completing its design, development, and testing in parallel with normal preparations for the 2012 filing season.
Other elements of the new program changed the culture of IRS processing, moving from weekly cycles to daily cycles -- and introducing the notion of overnight and weekend shifts in an organization that was strictly a 9-to-5 shop. The testing methodology also changed to be more integrated across the entire system, and the IRS staff had to learn many new commercial tools brought in as part of the modernization of the base platform. The program also used a program management office for which the application delivery team became a key supplier -- a novel approach in the IRS, where pipelined delivery from one team to the next was the norm, rather than partnerships.
Faced with unusual circumstances, Miller asked her employees to embrace an unprecedented cultural transformation, motivating hundreds of employees to change their thinking and conduct business in a different way.
In January 2012, the CADE 2 program for the first time delivered daily tax-return processing for all eligible individual taxpayers and in March 2012 delivered a fully populated database for all individual taxpayers. Since go-live, systems have run flawlessly in production, experiencing 100 percent on-time performance. On March 22, the CADE 2 program achieved another significant milestone when the initialization of the relational database in the production environment was completed -- more than one week ahead of schedule -- and balanced to the penny with the Individual Master File. By April 5, the new system had processed more than 1.8 billion transactions and issued 83 million refunds totaling $229 billion.
John Schanz, chief network officer, Comcast Cable
At Comcast Cable, the technology approach used to rely on disparate regional networks and systems. Launching new products and features required duplicated efforts across these multiple networks due to the lack of a cohesive, companywide shared services platform. Talent in place was similarly decentralized, and skill sets were skewed toward cable-technology strengths rather than Internet Protocol expertise. As a result, upgrades to video, voice, and high-speed Internet service took many, many months to accomplish at rapidly escalating cost. Yet customers wanted those services at an advanced level.
As chief network officer, John Schanz runs the company's National Engineering & Technology Operations unit -- a 3,400-person team charged with the IP-based platforms. He "Webified" Comcast's networks, technologies, and operations, serving as the strategic architect and designer of the Comcast backbone, regional area networks, and content delivery network to enable nationwide IP connectivity. That network carries more than 4Tbps of traffic at peak.
Schanz created a private cloud computing network by using IP technology in Comcast's data centers and infrastructure. He also pushed the IP core to the edges of the network, for next-generation content distribution systems that store and deliver hundreds of thousands of video-on-demand titles on a series of centralized library servers that can be accessed by local servers closer to customers, as well as video streaming to iPads, iPhones, Android devices, and Xbox consoles. Comcast serves 20 billion video-on-demand downloads -- outpacing Apple's iTunes service.
Schanz's team also used IP for its voice services, resulting in a Skype service on television and readable voice mail in early 2012. The company recently launched a monitored home security service using that IP network as well.
A third element in the "Webify" strategy was to build APIs to back-office functions that had traditionally been static, proprietary, and difficult to change, to make them more flexible going forward. As expectations of what a cable company delivers change, Comcast believes Schanz's "Webification" transformation will let it meet those expectations.