Twenty or even ten years ago, we had specialized devices in our lives that each handled a specific function. A cell phone. A camera. An alarm clock. A portable CD player or digital music player. A set of paper maps or a GPS. The smartphone has absorbed all of these functions and much more.
Virtualization has in some ways had a similar impact on the datacenter in just as short a time period, but it’s harder to change enterprise systems than consumer devices. Storage in particular has struggled to adapt. Traditional storage arrays were developed in the 1980s and 1990s, and were designed to map logical storage objects such as a LUN or volume directly to one monolithic application.
Virtualization breaks that model, which in turn has significant implications for storage performance monitoring. Traditional storage monitoring tools are usually platform-specific, designed to provide deep insight into the storage platform itself -- disk and controller latency, throughput, read or write errors, component health, capacity, fragmentation and more. These tools are also where storage administrators manage storage configuration and tune performance.
Storage monitoring and management solutions, like application performance monitoring (APM) and network performance monitoring (NPM) solutions, are naturally focused on the storage itself rather than on the bigger picture of what is happening across the datacenter.
At Uila, we work with customers who are looking to improve monitoring of their virtual environment. In the modern software-defined datacenter, there are five things we have found that traditional storage monitoring and management solutions don’t tell you:
1. How applications map to the underlying storage
Some storage monitoring tools, like their NPM counterparts, have evolved to include some application awareness or virtualization awareness (for example, through VMware’s vVols APIs). What is still missing is the bigger picture of how the overall storage infrastructure maps to applications, particularly for multi-tier applications (see the sketch after this list for one way to build that mapping yourself).
2. Which applications are affected when there are storage performance issues
That bigger picture is important. If there are storage latency or throughput issues, it can be hard to tell exactly which applications and users feel the impact, which makes root cause analysis a much more drawn-out, painful process.
3. How much throughput each application is using over time
Storage managers, like their peers who oversee networking, want to know which applications are using up resources -- capacity and throughput (or IOPS). But storage monitoring tools look up from the storage layer and don’t offer visibility into how much demand each application puts on it over time.
4. How application traffic flows across the network to storage
How application traffic moves across the network between servers and storage is another variable that affects performance. But storage managers almost never have any visibility into it, and gathering statistics from the networking team takes manual coordination and time.
5. If storage isn’t to blame
Ten years ago, networking managers would have told you they were always blamed for performance issues. Today, everyone tends to assume storage is to blame. If you are the storage manager and your storage dashboard is green but everyone is still pointing at you, how do you prove it isn’t the storage?
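To make the first point concrete, here is a minimal sketch of one way to build an application-to-storage map yourself in a VMware environment, using the open-source pyVmomi bindings for the vSphere API. The vCenter hostname, credentials, and the idea of grouping VMs into applications are placeholders for illustration; this is not how Uila’s own discovery works, which correlates this mapping automatically.

```python
# Minimal sketch: map each VM to its virtual disks and backing datastores.
# Assumes a reachable vCenter (placeholder hostname/credentials) and the
# pyVmomi library (pip install pyvmomi). Error handling omitted for brevity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:          # skip templates / inaccessible VMs
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                ds = getattr(dev.backing, "datastore", None)
                print(f"{vm.name:30} {dev.deviceInfo.label:12} "
                      f"{dev.capacityInKB // (1024 * 1024):6} GB  "
                      f"datastore={ds.name if ds else 'n/a'}")
finally:
    Disconnect(si)
```

Even this simple inventory shows why the problem is hard: the mapping changes every time a VM is migrated or a disk is added, so a point-in-time spreadsheet goes stale almost immediately.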
Modern storage platforms have helped to a degree, since some of them are designed to work with virtualization, but even these are limited in their ability to monitor, detect and correlate problems beyond the storage infrastructure. They’re also not universal: many applications and virtual machines continue to run on traditional arrays.
How Full Stack Visibility Helps Storage Managers
Full Stack visibility solutions like Uila give IT infrastructure teams a top-to-bottom view of what’s happening in the datacenter. For storage managers, that brings a number of benefits:
- Application visibility for storage operations. Just being able to see application performance and know when there are potential problems can help storage teams get ahead of things.
- Storage performance visualization with application context. Understanding how applications and storage interact, and which applications rely on specific storage objects (LUNs, volumes, vDisks, etc.), can be extremely helpful; the sketch after this list shows one way to pull per-VM disk throughput for that kind of context.
- The ability to easily exonerate storage when it isn’t the problem. With a shared view of the infrastructure (not just storage), storage managers can quickly exonerate themselves and help track down the real root cause of any issues.
- Network and storage flow analysis. With visibility into network traffic flowing to and from the storage, teams can more easily identify hotspots.
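As one illustration of storage performance with application context, the sketch below pulls recent per-virtual-disk read/write throughput for a single VM from vCenter’s performance manager, again using pyVmomi. It assumes a connected ServiceInstance `si` and a VirtualMachine object `vm` (for example, obtained as in the earlier mapping sketch); the counter names (virtualDisk.read.average and virtualDisk.write.average, reported in KBps) and the 20-second real-time interval are assumptions that hold on recent vSphere releases but may vary by environment.

```python
# Minimal sketch: recent per-virtual-disk throughput (KBps) for one VM,
# given a connected pyVmomi ServiceInstance `si` and a VirtualMachine `vm`.
from pyVmomi import vim

def recent_vdisk_throughput(si, vm, samples=15):
    perf = si.RetrieveContent().perfManager
    # Build a lookup of "group.name.rollup" -> counter key.
    counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
                for c in perf.perfCounter}
    metric_ids = [
        vim.PerformanceManager.MetricId(counterId=counters[name], instance="*")
        for name in ("virtualDisk.read.average", "virtualDisk.write.average")
    ]
    spec = vim.PerformanceManager.QuerySpec(
        entity=vm,
        metricId=metric_ids,
        intervalId=20,        # 20-second real-time samples
        maxSample=samples)
    for entity_metric in perf.QueryPerf(querySpec=[spec]):
        for series in entity_metric.value:
            # One series per virtual disk instance; values are in KBps.
            print(series.id.instance or "aggregate",
                  series.id.counterId, series.value)
```

A full-stack tool goes further by correlating these numbers with the application transactions and network flows behind them, but even this view answers the basic question of which VM is driving demand on the storage.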
At Uila, we believe anything that impacts application performance should be managed, monitored and optimized for the environment. Storage monitoring solutions will provide detailed insight into an issue once you’ve detected a problem, but you need a much higher-level view of the datacenter to monitor everything effectively.
Please get in touch if you’d like to see a demo or run a complimentary trial of Uila’s solution -- and let’s fly overhead together!