
How To Convert Your Datacenter To A Formula 1 Racecar

Nilesh Rane, Associate Vice President, Product & Services, Netmagic

IT infrastructures have changed considerably over the last decade. Aside from the technology evolution (or should that be revolution?), the role they are being asked to fulfill now demands more from hardware, software and applications. This, in turn, has had a significant impact on the role of the average IT team. A good way to think about it is to look at how motor racing has evolved over the last 50 years.

Think of your IT infrastructure as a racing car. Your datacenter is the chassis, the servers are your business’ engines, the power supply is the fuel and your business applications are the transmission that drives the business. Your IT team is the pit crew that makes sure your racing car is optimized for maximum performance. In the 1950s, the pit crew relied on only one source of data – the driver – to provide critical information about the car’s performance as it raced around the track. When the driver came in for a pit stop he would tell the crew what was wrong with the car – more air in the left front tire, change the fuel mix, adjust the rear wing to increase downforce, and so on.

This approach created a number of challenges for pit crews in the 1950s. The data they received was not very scientific: it came from a single source and was based on subjective perceptions, so it lacked any real accuracy. As a result, the changes they made were just as likely to diminish performance by creating a new problem elsewhere on the car. The approach was also purely reactive – changes could only be made mid-race, rather than while the car was still on the grid – limiting their effect on overall performance.

How does this relate to modern IT infrastructures? Consider the complexity of a modern Formula 1 racing car and the evolution in its performance since the 1950s. Modern IT infrastructures mirror this development. Businesses have focused on performance without tackling the complexity – akin to asking a 1950s mechanic to make changes to a modern Ferrari using the Haynes manual for his Fiat 500. Sound impossible? Effectively, this is what many businesses are asking their IT teams to do.

These teams are faced with 21st-century infrastructures but don’t always have 21st-century visibility and data. Much of the change they make is backward-looking, fixing historic problems rather than making improvements for the road ahead, and it is fraught with danger. A seemingly insignificant change to a minor component can often bring the car sputtering to a halt, especially if the ‘crew’ is operating blind, unaware of the relationships and dependencies one part has to another.

What IT professionals really need is a forward-looking view that enables them to be proactive. In modern day racing terms we would turn to real-time telemetry to solve these problems. But how does a modern IT team fix such issues within the datacenter to create a high-performance business infrastructure?

A simple solution is to find an automated way to identify the hardware and software components in the datacenter, and to map business applications onto them on a continuous basis. Add analytics to help navigate this information, such as reports and dependency graphs, and you have a way to both accelerate the planning phase of performance optimization and remove its risk, while simultaneously tracking the program’s progress over time. Or, to put it another way: telemetry.
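To make the idea concrete, here is a minimal sketch of what such a continuously refreshed dependency map and report might look like, reduced to a plain mapping of components to the things they depend on. Every application and host name below is hypothetical; in practice a discovery tool would populate and refresh this map on every scan.

```python
# A minimal sketch of an application-to-infrastructure dependency map.
# All names (apps, databases, hosts) are hypothetical; an automated discovery
# tool would refresh this map continuously rather than relying on hand entry.

dependencies = {
    "billing-app":     ["billing-db", "payment-gateway"],
    "billing-db":      ["server-db-01"],
    "payment-gateway": ["server-app-02"],
    "crm-app":         ["crm-db", "server-app-02"],
    "crm-db":          ["server-db-01"],
}

def downstream(component):
    """Everything `component` ultimately depends on (a simple dependency report)."""
    seen, stack = set(), [component]
    while stack:
        for dep in dependencies.get(stack.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Which pieces of infrastructure does the billing application actually touch?
print(sorted(downstream("billing-app")))
# ['billing-db', 'payment-gateway', 'server-app-02', 'server-db-01']
```

Even this toy version shows how a dependency report falls straight out of the data once the map exists.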

Automated discovery and dependency mapping represents the modern racing car, with accurate and fast telemetry that feeds back to the pit crew. The driver can then focus on racing while the crew uses modern technology to continuously hone the car’s performance. This combination helps modern IT pit crews understand exactly what they need to change and provides extremely accurate near real-time data points, helping ensure the car is always optimized and delivering on its true potential. Without these accurate data points, it’s like trying to drive a 1950s car in a 2008 Formula 1 race.

This kind of telemetry also enables the IT team to make changes on the fly, so there is no need to wait for the car to come into the pit. Potential problems in the infrastructure can be identified and avoided before they occur, with changes made proactively rather than after the fact. The driver can be confident that the IT team has reduced the risk of a breakdown and armed them with a significant competitive advantage.

Many firms embark on ambitious datacenter optimization projects, only to realize that they lack fundamental data about their environment (a Haynes guide to their network). They also need something to turn the telemetry (masses of raw data) into meaningful information that can be mapped to the company’s business objectives. In the earliest planning stages of the project, it is critical to gain a complete map of the datacenter resources and the application dependencies – fast. It is also important to understand that no datacenter stands still; changes occur all the time, causing configuration drift from that baseline. In racing terms, this is the tire wear and fuel consumption that accumulate as the laps go by.
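As an illustration of configuration drift, here is a hedged sketch of how a fresh discovery snapshot might be compared against a baseline. Snapshots are reduced to plain dictionaries of component attributes, and every host name and value is invented.

```python
# Compare the latest discovery snapshot against a baseline to spot drift.
# Snapshots are reduced to plain dicts of component -> attributes; every
# host name and value here is invented for illustration.

baseline = {
    "server-db-01":  {"os": "RHEL 7.6", "ram_gb": 64, "apps": ["billing-db"]},
    "server-app-02": {"os": "RHEL 7.6", "ram_gb": 32, "apps": ["payment-gateway"]},
}

current = {
    "server-db-01":  {"os": "RHEL 7.9", "ram_gb": 64, "apps": ["billing-db", "crm-db"]},
    "server-app-03": {"os": "RHEL 8.1", "ram_gb": 32, "apps": ["payment-gateway"]},
}

def drift_report(baseline, current):
    added   = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = {
        name: {key: (baseline[name][key], current[name][key])
               for key in baseline[name] if baseline[name][key] != current[name][key]}
        for name in set(baseline) & set(current)
        if baseline[name] != current[name]
    }
    return {"added": added, "removed": removed, "changed": changed}

print(drift_report(baseline, current))
# {'added': ['server-app-03'], 'removed': ['server-app-02'],
#  'changed': {'server-db-01': {'os': ('RHEL 7.6', 'RHEL 7.9'),
#                               'apps': (['billing-db'], ['billing-db', 'crm-db'])}}}
```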

Even when the IT department has gained a clear understanding of how applications relate to both the physical and virtual infrastructure, and of the dependencies between them, by the time the move for an application or system finally occurs it’s highly likely that some portion of the topology or dependencies has changed. Today, physical resource information (such as hardware and software audits) is often still gathered manually – think of the 1950s pit crew – increasing the likelihood of error and the overall time required. Information about application dependencies tends to be gathered manually as well – a very time- and resource-intensive process. When recorded in this fashion, the overall quality of this crucial data is typically only around 50-70%.
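To put a figure like 50-70% in context, here is a rough sketch of how a manually maintained inventory might be scored against what automated discovery actually finds. The records below are invented purely for illustration.

```python
# Score a manually maintained inventory against discovered reality.
# Quality here = discovered items whose manual record exists and matches,
# divided by everything discovery found. All records are invented.

manual_records = {
    "server-db-01":  {"os": "RHEL 7.6", "owner": "billing"},
    "server-app-02": {"os": "RHEL 7.6", "owner": "payments"},  # host since rebuilt
    "server-app-05": {"os": "RHEL 6.9", "owner": "unknown"},   # no longer exists
}

discovered = {
    "server-db-01":  {"os": "RHEL 7.6", "owner": "billing"},
    "server-app-03": {"os": "RHEL 8.1", "owner": "payments"},  # missing from the records
}

accurate = sum(1 for name, attrs in discovered.items()
               if manual_records.get(name) == attrs)
quality = accurate / len(discovered)
print(f"inventory quality: {quality:.0%}")   # inventory quality: 50%
```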

When it comes to moving critical IT components, a failed move can lead to additional costs and missed project deadlines. Not having a single, automated view of application topology and dependencies in the datacenter can cause painful outages when moves occur – not only to the application being moved, but also to related systems and applications. The risk of application downtime caused by an inaccurate inventory of assets, or a poor understanding of their relationships, leads many organizations to perform only like-for-like application swings when executing a datacenter move. This minimizes the moving parts that could fail, but misses the opportunity to refresh and therefore optimize.
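Building on the dependency map sketched earlier, a pre-move impact check is essentially a reverse traversal: find everything that transitively depends on the component being moved. Again, a minimal sketch with hypothetical names.

```python
# Reverse impact analysis: which applications are at risk if a host moves?
# Same style of map as the earlier sketch; every name is hypothetical.

dependencies = {
    "billing-app":     ["billing-db", "payment-gateway"],
    "billing-db":      ["server-db-01"],
    "payment-gateway": ["server-app-02"],
    "crm-app":         ["crm-db", "server-app-02"],
    "crm-db":          ["server-db-01"],
}

# Invert the edges: component -> things that depend on it directly.
dependents = {}
for node, deps in dependencies.items():
    for dep in deps:
        dependents.setdefault(dep, set()).add(node)

def impacted(component):
    """Everything that transitively depends on `component`."""
    seen, stack = set(), [component]
    while stack:
        for parent in dependents.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Moving server-app-02 puts the payment gateway, billing and CRM applications at risk.
print(sorted(impacted("server-app-02")))
# ['billing-app', 'crm-app', 'payment-gateway']
```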

An automated approach to application discovery and dependency mapping enables high levels of data quality – a minimum of 97%, which feedback from our customers suggests is the benchmark for success. This means many more targets can be identified and optimized, which translates into very real performance improvements and cost savings. A true understanding of all the critical dependencies within your infrastructure, and the ability to continually refresh this view, means that the impact of any change will be absolutely clear – significantly decreasing risk and making unintended impacts on business applications a thing of the past.

Often as many as 2.5 to 5% of the servers still active in the datacenter could be removed with no impact on the business, and it’s estimated that another 10% of servers have no current usable function but are still taking up space and consuming power. Automated discovery and dependency mapping can give enterprises the accurate and constantly up-to-date infrastructure blueprint they need to put themselves in pole position: achieving cost savings faster, enhancing business agility and managing operational risk to provide Formula 1-grade IT and business service optimization.