Ensuring that our datacenter networks can deliver not only on first deployment but also over the long term means taking important steps to minimise the likelihood of unexpected issues and their subsequent impact on customers.
Many global service providers found themselves in exactly this situation this year, with the global challenges of pandemic control and the shift in how users access their networks. A spike in demand for streaming and teleconferencing services caught some out temporarily, especially when it came to access-network capacity into our homes.
It was a challenging situation for every one of us, but interestingly, the core service providers in many cases appeared outwardly swan-like: calm on the surface, with spare capacity in hand to manage the extra load and the ability to balance demand across their distributed datacenter footprint. This did not happen by accident, of course; countless hours of work will have gone into designing, installing, and testing every element of the network.
It is this final testing and assurance step which truly confirms that network delivery has been a success, although the real proof, of course, lies in how the network then meets the demands placed upon it.
But let us take a short pause here and rewind to look at how today’s datacenter has changed.
If we review how datacenter IT has evolved over the last 20 years or so, we can see that the technologies within the network have changed significantly. Today, transmission speeds and availability figures are of course important, but thanks to the shift towards hyperconverged compute and highly interconnected mesh connectivity, workloads can be scaled dynamically and there is less dependency on a single critical location or IT deployment delivering all of our data needs.
Of more importance now for many is the ability of an IT resource to be agile and flexible, able to grow and adapt on demand based upon often short-term needs. This is highlighted by the shift in who is building these datacenter facilities today. Far fewer projects are implemented by the typical Enterprise business themselves; instead, global IT has shifted significantly into the hands of the Cloud or Colocation Service Provider, who offers scalable and reliable services for a simple monthly fee.
Added to this, operators are now shifting processing out of the core and towards the edge, which will, for many users and applications, deliver significant advantages in application latency and data transport costs.
So now we find datacenter and cloud service providers with a new topology, one that is more distributed once again, as opposed to the centralised model they have become accustomed to in recent years.
Whilst this design does allow for greater resilience, it is not a get-out-of-jail-free card. It needs to be backed up by an infrastructure which consistently delivers at its greatest potential for the desired lifetime of the network, and one which can also rise to the challenge of the unexpected.
So, it has been clearly demonstrated that both the datacenters and the connectivity which links them play a critical role in our 21st-century lives. It is essential, therefore, that we test and prove our networks in advance, and that we are able to respond quickly and decisively to any problems which might occur, to minimise any impact on the services they are supporting.
VIAVI has therefore set itself two goals to make life a little easier for the teams tasked with building and managing a datacenter network.
Firstly, through the simplification and automation of the infrastructure testing workflow, which is often a lengthy and complicated process in which errors in testing can fail to highlight potential shortfalls in performance. Here our aim is to bring a greater level of consistency to the planning and completion of such tests, while providing greater transparency to the overall project team involved.
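To make the idea of a consistent, repeatable test plan a little more concrete, the sketch below shows one way a link's loss budget and pass/fail verdict could be captured in code. It is an illustrative sketch only, not a VIAVI tool or workflow: the class name, the per-connector, per-splice and per-kilometre allowances, and the example link are all assumptions, and real limits should come from the applicable standard and the project specification.

```python
"""Illustrative sketch only: a minimal, machine-checkable fibre link test plan.

The allowance figures below are assumed typical values for illustration;
replace them with the limits from the applicable standard and project spec.
"""
from dataclasses import dataclass


@dataclass
class LinkTestPlan:
    length_km: float                     # installed fibre length
    connector_pairs: int                 # mated connector pairs in the link
    splices: int                         # fusion splices in the link
    loss_per_connector_db: float = 0.75  # assumed allowance per mated pair
    loss_per_splice_db: float = 0.3      # assumed allowance per splice
    fibre_loss_db_per_km: float = 0.4    # assumed fibre attenuation

    def loss_budget_db(self) -> float:
        """Maximum acceptable insertion loss for this link."""
        return (self.connector_pairs * self.loss_per_connector_db
                + self.splices * self.loss_per_splice_db
                + self.length_km * self.fibre_loss_db_per_km)


def evaluate(plan: LinkTestPlan, measured_loss_db: float) -> str:
    """Return a consistent PASS/FAIL verdict the whole project team can read."""
    budget = plan.loss_budget_db()
    verdict = "PASS" if measured_loss_db <= budget else "FAIL"
    return f"{verdict}: measured {measured_loss_db:.2f} dB against a budget of {budget:.2f} dB"


# Example: a 2 km link with two connector pairs and one splice
print(evaluate(LinkTestPlan(length_km=2.0, connector_pairs=2, splices=1), 1.45))
```

Capturing the plan in this way means every link is judged against the same, pre-agreed limits, and the resulting verdicts can be shared with the wider project team without ambiguity.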
Now, the reality is that once the network is deployed, there is an increased risk of ad-hoc or poor cabling management leading to damage or a reduction in the performance of the optical network. For many operators this can be very challenging to identify, and harder still to correct, once traffic is flowing across their network. Our second goal, therefore, is to help the operations team safely maintain the infrastructure during normal day-to-day change activities, monitor its physical condition and performance metrics for any indication of a drop-off, and then provide them with tools to trace and narrow down where the problem might lie.
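As a simple illustration of what such drop-off monitoring might look like, the sketch below compares current optical receive-power readings against a recorded baseline and flags ports that have fallen by more than a chosen margin. Again, this is a hypothetical sketch rather than any specific product capability: the port names, baseline figures, and alert margin are invented for the example, and in practice the readings would come from whatever monitoring system is in use.

```python
"""Illustrative sketch only: flagging a drop-off against a recorded baseline.

Port names, baseline values, and the alert margin are invented for the example.
"""

# Baseline optical receive power (dBm) recorded at turn-up, per port
BASELINE_RX_DBM = {"spine1:eth1": -2.1, "spine1:eth2": -2.4, "leaf3:eth49": -3.0}

# Assumed margin: alert if receive power falls this far below its baseline
DROP_ALERT_DB = 1.5


def check_drop_off(current_rx_dbm: dict[str, float]) -> list[str]:
    """Compare current receive power with the baseline and list suspect ports."""
    suspects = []
    for port, baseline in BASELINE_RX_DBM.items():
        current = current_rx_dbm.get(port)
        if current is None:
            suspects.append(f"{port}: no reading (possible link or polling fault)")
        elif baseline - current >= DROP_ALERT_DB:
            suspects.append(f"{port}: rx power down {baseline - current:.1f} dB from baseline")
    return suspects


# Example poll: one healthy port, one degraded port, and one missing reading
print(check_drop_off({"spine1:eth1": -2.2, "spine1:eth2": -4.3}))
```

A check of this kind does not locate the fault on its own, but it narrows the search to specific ports and links, which is exactly the starting point the operations team needs before reaching for tracing and inspection tools.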
In this we have the support and close collaboration of our partners in this evolving ecosystem. Together we will deliver the very best solutions to our customers, so they can do the same for theirs.