Information Technology certainly isn’t what it used to be, is it? Sometimes we long for the days when our biggest problems were a user’s inability to log in or their printers not working. Do you remember the first time you pulled a cable? Crimped a connector? Successfully authenticated to Active Directory? Delivered an e-mail to an inbox?
How about creating virtual machines (VMs)? That was awesome! Remember your first “P2V” and how cool it was to see more than one OS running on a single physical server? And how cool was it to see all your VMs sharing the same pool of storage!? Across the same “virtual” network!? Each of these questions represents an important milestone in the last 20 years of IT history and the evolution of enterprise IT technology. These milestones seem to arrive about every 10 years, and we have officially entered another one of these evolutionary cycles.
DevOps & the Era of the Application
If you think about it, it’s always been about the application. Everything else was just a necessity – a means to an end – to run the application. Setting up, running, and maintaining the entire underlying infrastructure consumed 90% of our staff, resources, and capital expenditure each year…all to run a suite of applications.
That infrastructure complexity is what drove the massive consolidation of the virtualization era, roughly 2007 to 2013, and the trend continues to this day. Some companies are still struggling to reach 100% virtualization, and the irony is that the usual limitation preventing a server from being virtualized is the application itself! Although the industry has largely cleared this hurdle, it held many companies back from moving forward with virtualization projects, especially where proprietary and custom applications could not be updated to support modern infrastructure methodologies.
Even then, the application was the master. This delay left many companies behind the 8-ball, playing catch-up for years. As the infrastructure evolved, so too did the virtualization technologies themselves and the third-party ecosystem around them: storage systems, management suites, and business continuity and disaster recovery software.
The unfortunate side effect of this lag is that you end up running two or more sets of infrastructure in parallel. With disparate architectures, software, and licensing, running multiple sets of infrastructure to accomplish one set of goals introduces excessive operational complexity and expense. The number of things you have to keep track of compounds quickly, so even though virtualization moved things forward from a technology perspective, it made running an IT department efficiently much harder. As virtualized infrastructures scale up, we ultimately end up with a problem just as big as the one that led to virtualization in the first place.
It’s funny how all of this comes full circle, as technology begets more technology. As fast as things move these days, it can feel like a flood of new software, tools, and hardware that is impossible to hold back. There is always a bigger, better, stronger, faster tool, widget, or gizmo that “can fix problem XYZ for you!” But every one of those comes with its own set of requirements, demands, and dependencies to take advantage of the latest and greatest. It’s very easy to fall into the trap of the never-ending upgrade cycle: just as you finish one set of upgrades, it’s time to upgrade and migrate something else. And on, and on…and on.
We’ve also begun to think about new ways of handling application development: a way to get infrastructure dependencies and red tape out of the way and let coders just code. It isn’t some magical box or piece of software you can buy; it’s a methodology, a mantra, that gives developers a faster, more efficient way to build and test software without having to wait on change management cycles and infrastructure team approvals.
This movement got its start in 2008 at the Agile conference in Toronto, where Andrew Shafer and Patrick Debois began the conversation that would soon be given a name: DevOps.
It would be many years before DevOps became a mainstream buzzword across the industry. IT teams and vendors were still focused squarely on virtualization, and yet another new technology and way of doing things outside of corporate IT was competing for everyone’s attention.
Preparing for DevOps and the Era of the Application
In a broad survey recently conducted by ActualTech Media, respondents were asked to rank their priorities for the next 12 to 18 months. The #1 ranked priority was to improve operational efficiency. This shows that companies recognize the operational inefficiency in their environments and are motivated to eliminate it. As the survey shows, most companies today are still working to find a way to fully leverage DevOps and to refocus IT priorities away from legacy tasks and toward business applications. So, what can you do?
At Uila, we recently published a new book entitled The Gorilla Guide to … Application-Centric IT. In this free book, you’ll learn:
- The advantages of an application-focused approach to IT
- How application dependencies can simplify workload migration and resource planning
- How to start the journey of developing a "full stack" mindset for managing applications