
Technical debt is an ongoing issue that no one should expect to square away for good: as technology advances, even today’s top systems will eventually achieve full “legacy” status. If you don’t keep on top of it, however, technical debt will eventually cause significant damage to your pocketbook and reputation. If that sounds like an exaggeration, get up to speed on Southwest Airlines’ meltdown during the 2022 holiday season. It was a technical debt-driven debacle of the most public nature.
Instead of pretending technical debt doesn’t exist or fooling yourself into thinking that you can solve it once and for all, consider addressing technical debt as a recurring cost related to maintaining your technology investments. If you’re ready to get a handle on your organization’s technical debt, let’s talk about some ways to smooth the process through better planning and using a data operating system.
The first step in any planning process related to system replacement is to understand what is in place and what the necessary future state will look like. Unfortunately, many legacy systems are difficult to navigate and hard to query. As a result, it can be quite a task for any team, technical or not, to dive into legacy systems and validate precisely what data each system contains, how it is structured, what the replacement system will need to create, and which subsets of that data must be extracted and migrated before the system is retired.
A data operating system, such as DataOS from The Modern Data Company, can be an immense benefit in this situation. A data operating system can be laid on top of any legacy system and tied into its data. It then provides a modern interface layer that easily interacts with common reporting and analysis tools. These tools request data from the data operating system, which in turn handles extracting the data from the legacy system. This seamless connectivity lets people use the tools they are comfortable with, even when working with data from a legacy system. As a result, inventorying and profiling the data is much easier and faster than using the tools embedded within a legacy system. This is especially true given that few people will be knowledgeable about and comfortable with those legacy toolsets.
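To make the idea concrete, here is a minimal sketch in Python of the pattern being described: a thin layer exposes one modern query interface and routes each request to an adapter for the underlying legacy source. The class and method names are illustrative assumptions, not DataOS’s actual API.

```python
# Illustrative sketch of an "interface layer over legacy systems" pattern.
# Names (SourceAdapter, MainframeAdapter, DataLayer) are hypothetical.

from dataclasses import dataclass
from typing import Protocol


class SourceAdapter(Protocol):
    """Anything that can answer a simple query against one source system."""
    def fetch(self, table: str, columns: list[str]) -> list[dict]: ...


@dataclass
class MainframeAdapter:
    """Wraps a legacy system and translates modern requests into its native access path."""
    connection_string: str

    def fetch(self, table: str, columns: list[str]) -> list[dict]:
        # A real adapter would call the legacy system's ODBC bridge, API, or
        # flat-file export; this stub returns placeholder rows so the sketch runs.
        return [{col: f"<{table}.{col}>" for col in columns}]


class DataLayer:
    """Single entry point: reporting and analysis tools ask it for data,
    and it routes each request to the right source adapter."""

    def __init__(self) -> None:
        self._sources: dict[str, SourceAdapter] = {}

    def register(self, name: str, adapter: SourceAdapter) -> None:
        self._sources[name] = adapter

    def query(self, source: str, table: str, columns: list[str]) -> list[dict]:
        return self._sources[source].fetch(table, columns)


layer = DataLayer()
layer.register("billing_mainframe", MainframeAdapter("dsn://legacy-billing"))
print(layer.query("billing_mainframe", "invoices", ["invoice_id", "amount"]))
```

The point of the pattern is that the reporting tool above the layer never needs to know how the mainframe stores or serves its data.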
Looking into a specific system is a start. However, to unlock the maximum power of corporate data, it is necessary to mix data from different systems and allow each data source to enhance the others. Various architectures, from data warehouses to data lakes, have attempted to solve this problem over the years. However, those solutions entail extracting data from the source systems and copying it into a more analysis-friendly environment. Even if enterprise-level data repositories of this nature are in place, it is unlikely that they hold 100% of the information in the source legacy systems, because only the most important and widely used data is typically extracted and loaded into an environment such as a data warehouse or data lake.
Before retiring a legacy system, however, it is a good idea to ensure there isn’t other data worth preserving that never made it out in the past. A data operating system makes this exploration seamless because any system already mapped can be joined and mixed with any other system. The data operating system handles the access and movement of data from each underlying system to facilitate any query. Putting a layer over unchanged legacy systems isn’t the long-term solution, but in the short term it allows for a thorough exploration of an organization’s data to document and plan what data will be migrated and kept and what data a replacement system will need to generate.
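As a rough illustration of that exploration step, the sketch below (with made-up tables and keys) compares rows visible through the mapped legacy system against what already lives in the warehouse, flagging records that were never extracted and therefore need a preservation decision before retirement.

```python
# Hypothetical example: both sources are already mapped, so their rows can be
# pulled side by side and compared before the legacy system is retired.
legacy_orders = [{"order_id": 1}, {"order_id": 2}, {"order_id": 3}]   # rows from the legacy system
warehouse_orders = [{"order_id": 1}, {"order_id": 3}]                 # rows already in the warehouse

warehouse_ids = {row["order_id"] for row in warehouse_orders}
never_migrated = [row for row in legacy_orders if row["order_id"] not in warehouse_ids]

# Anything left in never_migrated is data worth reviewing before retirement.
print(never_migrated)  # [{'order_id': 2}]
```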
Once the data from the first two phases is understood and documented, the next step is determining what functionality users will need and how to deliver it. There is no better way to help users decide on their requirements than to give them access to prototypes that can be created and updated quickly. Using prototypes to define and validate requirements mitigates risk and streamlines the development of a final solution.
Laying an application on top of a data operating system’s map of corporate data enables application prototypes to be developed quickly and easily. Interfacing with the data operating system can happen through fully modern tools and protocols while the data operating system handles the messy details of dealing with the legacy systems. Over time, prototypes will evolve into production applications that can sit on top of the same operating system as the initial prototypes. Modernizing applications and access to data is a crucial component of retiring technical debt, and this phase gets an organization a good part of the way there as far as the front-end applications go.
While bolting modern applications on top of legacy systems is a start, it has some shortcomings. First, the legacy systems are still in place, even if users have better access to their data and more functionality. Next, performance won’t be optimal because there is still a dependency on the legacy platforms to serve the data. Finally, as legacy systems age, they become more prone to failure and to having underlying code that contains now-unsupported functions and components.
The final step in retiring technical debt is to replace each legacy system over time. The beauty of doing this after a data operating system is in place is that the end-user interfaces don’t need to change. As a new system comes online, the data operating system can be repointed to map to the new system instead of the legacy system. While technical work is still necessary to make that remapping occur, it will be seamless for the end users and won’t require application changes. Instead, those applications will make the same data requests as always, but the data operating system will execute them differently by making the most of the modernized replacement systems. At this point, there are updated applications sitting on top of updated systems connected by a modern layer of technology that, when combined, retires your technical debt! When systems eventually need to be replaced again, repeat the process. It’s a winning approach.
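One way to picture the repointing step is as a change to the layer’s source mapping rather than to any application. The sketch below is a simplified assumption about how such a mapping might look; applications always ask for the logical dataset name, so only the layer’s configuration changes at cutover.

```python
# Hypothetical mapping of logical dataset names to physical sources.
dataset_sources = {
    "customer_profile": "legacy_crm.CUSTMAST",  # before migration: the legacy system
}

def resolve(dataset: str) -> str:
    """Applications call this by logical name; they never see the physical source."""
    return dataset_sources[dataset]

print(resolve("customer_profile"))  # -> legacy_crm.CUSTMAST

# Cutover: one change inside the layer, zero changes to the applications above it.
dataset_sources["customer_profile"] = "new_crm.customers"
print(resolve("customer_profile"))  # -> new_crm.customers
```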
By following the steps outlined here, you can modernize your handling of data as you retire technical debt. It will still require time, effort, and money, but far less than the approaches of the past. It will also help you avoid having technical debt blow up in your face, as happened so publicly to Southwest Airlines.
A data operating system, such as DataOS from The Modern Data Company, can be used to assess and diagnose where debt exists up front, determine how best to mitigate that debt, and serve as part of the final solution. Once an architecture based on a data operating system is in place, future technical debt becomes much more manageable. Just make sure to budget the costs as part of ongoing maintenance efforts so that your technical debt doesn’t again grow to a dangerous level.
To learn more about how a data operating system like DataOS can help your organization modernize its systems and end-user functionality, download our e-book “Modernize Your Data Architecture Without Ripping and Replacing.”
Technical debt is an issue that often isn’t given the attention it deserves. Companies can even get away with ignoring it for quite a while. However, once it rears its ugly head, technical debt can be incredibly costly both in terms of money and reputation. Look no further than Southwest Airlines’ meltdown during the 2022 holiday season for an example of technical debt causing massive problems that hurt a company’s reputation as much as its balance sheet. Luckily, there are some steps that organizations can take to start addressing technical debt without breaking the bank.
As with anything, technical debt can’t be solved all at once. Resolving it takes concerted effort over time. At a minimum, the following stages must be completed before technical debt can be considered paid off:
1. Acknowledge the technical debt and its true magnitude at the executive level.
2. Inventory and assess the data and functionality locked inside the current systems.
3. Define the requirements for the future state.
4. Develop a modernization plan.
5. Upgrade and replace the legacy systems over time.
Many companies struggle to even get past the first stage. It’s not that the technical debt isn’t recognized within pockets of an organization, such as IT or the users of a given system. It’s usually that the senior executives holding the budget don’t truly understand the magnitude of the problem, whether because employees don’t feel comfortable telling the executives how bad things are or because a non-technical executive simply doesn’t know how to quantify the risk that exists.
Let’s assume you and your organization can get past the first stage. Congratulations! As you navigate the remaining stages, making use of a new technology called a data operating system, such as DataOS from The Modern Data Company, can make the process of removing technical debt go faster and more smoothly.
One challenge in this stage is that there can be multiple legacy systems in place that are not well integrated, if they are integrated at all. This makes it particularly hard to analyze information across those systems. A data operating system sits on top of existing systems, even legacy systems, and provides an inventory of the data assets within each system. This requires no changes to the underlying platforms beyond allowing the data operating system to have access. Once the corporate data has been mapped, the data operating system creates a single, central entry point that allows users to query and explore data across the enterprise. It also adds a cross-system security and governance layer that ensures corporate policies are followed.
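A simple way to think about that inventory is as a catalog keyed by system, built by letting the layer introspect each source it is allowed to reach. The structure and names below are illustrative assumptions, not the actual metadata model of any product.

```python
# Hypothetical cross-system inventory produced by the mapping step.
catalog = {
    "legacy_erp": {"gl_postings": ["account", "amount", "posted_at"]},
    "legacy_crm": {"CUSTMAST": ["cust_id", "name", "region"]},
    "cloud_dw":   {"customers": ["customer_id", "name", "segment"]},
}

def find_assets(keyword: str) -> list[str]:
    """Search every mapped system for tables or columns matching a keyword."""
    hits = []
    for system, tables in catalog.items():
        for table, columns in tables.items():
            if keyword in table.lower() or any(keyword in c.lower() for c in columns):
                hits.append(f"{system}.{table}")
    return hits

print(find_assets("cust"))  # -> ['legacy_crm.CUSTMAST', 'cloud_dw.customers']
```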
Once this layer is in place, it is much easier to perform the analysis needed to understand the extent of the technical debt. Data can be compared to see if it matches across different systems. New ways of combining the data from different systems can also be explored to help develop the requirements of the future. In effect, a data operating system enables an organization to explore its data as though it weren’t locked inside a mix of outdated and insufficiently functional systems.
Being able to see all of the data together helps validate accuracy, identify problems, and solidify new requirements. That’s a winning combination. Using that information, it is then possible to attack stages three and four and come up with a modernization plan you can have confidence in.
The good news is that as you begin to upgrade and replace your current systems in stage five, the data operating system that was put in place to help with the scoping and planning work in stages two through four can stay right where it is. The operating system will maintain its cross-system security and governance layer that ensures compliance while work is done to modernize the underlying systems.
The data operating system also adds a layer of abstraction on top of the other corporate systems so that it serves as a centralized service. Downstream processes and applications can be redirected to access the data operating system layer. In turn, that layer can be adjusted to make use of new systems and functionality as they become available. A single change at the operating system layer flows automatically through to all downstream processes and applications.
While a data operating system might at first sound too good to be true, it is really a natural extension of the evolution of APIs, services, and system interconnectivity. It can help you isolate and minimize interaction with outdated systems by centralizing that access within the operating system. Having enterprise-ready security and governance adds even more power, since a level of sophistication that may not be available within any given system can still be applied on top of it.
If your organization has a lot of technical debt that needs to be addressed today, start by learning about DataOS from The Modern Data Company. The first and most robust data operating system, DataOS is helping companies modernize their data and analytics functionality. DataOS can be used to assess and diagnose where debt exists up front, determine how best to mitigate that debt, and serve as a big part of the final state. Consider making use of DataOS today to — hopefully — avoid your own Southwest-style meltdown caused by pent-up and unaddressed technical debt.
To learn more about how a data operating system like DataOS can help your organization modernize its systems and end-user functionality, download our e-book “Maximize Your Data Transformation Investments.”
It was hard to miss Southwest Airlines’ holiday travel fiasco earlier this year. After a winter storm blew through a large swath of the United States, Southwest’s systems and processes had a complete meltdown. It took thousands of canceled flights, many days, and countless disgruntled employees and customers before things got back to normal. While the weather certainly was a catalyst for the mess, it is widely understood that a high level of technical debt within Southwest’s operational systems made a bad situation much, much worse. This blog post will explore some of the factors that led to the meltdown and offer some ways organizations can avoid similar trouble with their own outdated systems.
Outside of the storm itself, there were multiple contributors to the problems Southwest had across its operations. One factor is that Southwest doesn’t follow the traditional hub-and-spoke model used by most airlines. Instead of having planes go out and back from a hub, its planes each fly their own circular route. This helps the airline avoid major issues when any single city has a disruption. For example, a winter storm in another airline’s hub will cause ripple effects across the country, since flights everywhere can depend on passing through the trouble spot. In Southwest’s case, a disruption in any given city won’t have a big impact on operations. However, a weak spot was exposed when a massive disruption hit many cities at once during a peak travel time. As more airports were impacted, Southwest’s more complex flight structure was stressed until it broke.
Along with its unique flight structure, Southwest traditionally has its planes fly more flights per day with higher loads per flight. This means that once things get messy, it is harder to recover. That was especially true during the holiday season, when there were few free seats to be found to accommodate the passengers impacted by cancellations. When things are running smoothly, having full planes and little spare capacity is terrific. The same qualities become a negative when trouble hits.
One last factor, tied to the prior two, is that Southwest ends up with planes, pilots, and crew scattered everywhere to support its unique style of operation. Whereas other airlines can focus on getting everyone back to a hub and resetting the system from there, Southwest can’t do that. For those airlines, when crews time out at a hub, there will likely be other crews available to step in. In Southwest’s case, surplus crews simply don’t exist in many markets, and it isn’t a simple matter of flying more in from a hub. Rather, there is a complex game of musical chairs that must take place as crews are redirected from one place to another while ensuring things will be covered once they leave their current spot.
While those non-technical factors certainly added to the holiday mess and aren’t irrelevant, a major factor was Southwest’s widely known and acknowledged technical debt within its outdated systems. Southwest’s unions have been so concerned about the outdated systems that they even prioritized asking for systems to be updated above asking for more pay. You know it’s serious when employees put something above their paychecks!
Due to the outdated systems, crew members often have to call in to let the airline know where they are and to ask for instructions on what to do next. During the holiday mess, crew members were often waiting for hours for their calls to get through, which only delayed things further. In 2023, this seems crazy: surely there must be a system that knows what flight the crew was assigned, knows the flight was late, and can update the crew scheduling and support systems, right? Apparently, this isn’t the case, as the systems don’t talk to each other well enough to handle such seemingly basic tasks.
While Southwest’s technical debt within its outdated crew and aircraft scheduling systems has been discussed for years, both internally and in the press, it will certainly become a major focus to be addressed aggressively moving forward. The airline has estimated the costs of the debacle at many hundreds of millions of dollars so far, and the inevitable lawsuits and other as-yet-unseen costs will only take that figure higher. Suddenly, the painfully expensive system upgrades required look to be less painful than continued repeats of what happened in December.
Most people would suspect that Southwest’s only option is to make do with what it has while it works as fast as it can to upgrade its legacy systems. While it is true that those legacy systems must be replaced, it is not true that there’s nothing else the company can do in the meantime. A new concept called a data operating system can help improve what’s in place today while also helping to integrate updated systems once they come online.
A data operating system sits on top of current systems, even legacy ones. It inventories the data assets within each system and creates a central mapping of all corporate data across all of the systems. This requires no changes to the underlying platforms beyond allowing the data operating system to have access. Once the corporate data has been mapped, the data operating system creates a single, central entry point that allows users to query and explore data across the enterprise. It also adds a cross-system security and governance layer that ensures corporate policies are followed.
While a data operating system might sound too good to be true at first, it is really a natural extension of the evolution of APIs, services, and system interconnectivity. By taking advantage of the fact that even legacy systems allow query access, a data operating system can enable modern functionality on top of them. By accessing each platform’s data and allowing it to be mixed and matched with that of other platforms, a data operating system gives an organization’s entire infrastructure a modern veneer that provides users the access to data and analytics they require. Over time, as the underlying legacy systems are replaced, the data operating system can simply repoint from the old system to the new one, and end-user functionality will continue uninterrupted.
If the idea of a data operating system sounds appealing, start by learning about DataOS from The Modern Data Company. The first and most robust data operating system, DataOS is helping companies modernize their data and analytics functionality and access even when there is a substantial legacy system presence. While the damage is already done at Southwest, the airline could make progress immediately by leveraging DataOS alongside its legacy system modernization initiatives. You and your organization can make use of DataOS today to — hopefully — avoid a Southwest-style meltdown of your own.
To learn more about how a data operating system like DataOS can help your organization modernize its systems and end-user functionality, download our e-book “Maximize Your Data Transformation Investments.”
When discussing self-service tools for data and analytics, people usually place their focus on the goal of enabling non-experts to do more things for themselves. While enabling citizen data scientists, citizen data engineers, and others to do more with data and analytics is a noble goal, there is another perspective to consider. The same tools and functionality that experts implement to democratize data and enable non-experts to do more can simultaneously be used by experts to increase their own productivity. This increase in expert productivity has the potential to drive a massive return that is often not recognized or pursued.
The traditional view of self-service is that automated modeling tools or streamlined data mapping tools are best targeted at non-experts. The tools are used to enable non-experts to do more for themselves and to free up experts to focus on things other than supporting users’ basic needs. The concepts of citizen data scientists and citizen data engineers arose from this approach. It isn’t an approach without merit, but it does carry risks.
Unless the tools are tightly managed and usage is tightly governed, there is a risk that these citizen scientists will use the tools to do things that are incorrect. Worse, the non-expert won’t be able to identify that there is an issue due to their limited depth of technical knowledge. For example, it is common for a non-expert to build a predictive model using a point-and-click tool without realizing that their problem is ill-formed and that the data they are feeding the model is not appropriate for their needs.
Managed well, self-service tools can largely minimize or avoid the risks mentioned while enabling more people to perform data and analytics tasks. The traditional view of enabling citizen data scientists and engineers is valid and achievable, but it isn’t the only path to value.
The same tools that are implemented to enable non-experts can also help reduce the workload and increase the efficiency of experts. This is because an expert can use the same point-and-click environment to get work done faster. For example, a data science expert knows how to properly define a model and how to feed the correct data into that model. Instead of coding a process manually, a self-service tool can be used to expedite the process. Similarly, a self-service data pipeline tool can be used by a data engineer to create a new pipeline more quickly.
The important point to recognize here is that experts want to streamline their efforts and be more productive just as much as non-experts do. If self-service tools that are often aimed at non-experts can also be used by the experts themselves, that’s a huge bonus. Experts will be able to fully understand the pluses and minuses of a self-service tool and begin applying it very quickly. They will also be able to push a self-service tool to its limits and learn which needs it best matches. Having the experts test drive a new self-service tool before releasing it to a wider audience is a great way not only to drive extra value from the experts but also to develop a plan for effectively rolling it out to non-expert users.
To get the most from self-service tools, it is necessary to have access to a broad range of data and to have the tools plug into existing governance mechanisms. Is there a way to make this easier? Yes! A modern data operating system, like DataOS from The Modern Data Company, provides the underlying platform that can enable successful rollouts of self-service tools to experts and non-experts alike.
A data operating system connects to all source systems, whether new or legacy, and makes a single view of all corporate data available. Self-service tools can point to this layer to give a user visibility into all the data that user has permission to use. Of course, security is important, and a data operating system can apply fine-grained permissions so that any given user can only see and access the data they are allowed to see. By defining permissions once in the data operating system layer, the security settings cover all downstream applications seamlessly.
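The “define permissions once” idea can be sketched as a policy check that sits in front of every data request, regardless of which downstream tool made it. The roles, column rules, and function names below are illustrative assumptions, not a real DataOS policy syntax.

```python
# Hypothetical column-level policy, defined once in the data layer.
policies = {
    "analyst":  {"customers": ["customer_id", "segment"]},          # no PII
    "marketer": {"customers": ["customer_id", "name", "segment"]},  # broader access
}

def authorized_query(role: str, table: str, columns: list[str]) -> list[str]:
    """Trim a request down to the columns this role may see; every tool goes through here."""
    allowed = policies.get(role, {}).get(table, [])
    return [c for c in columns if c in allowed]

# The same rule applies whether the request came from a BI tool, a notebook, or a prototype app.
print(authorized_query("analyst", "customers", ["customer_id", "name", "segment"]))
# -> ['customer_id', 'segment']
```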
When a user requests an analysis, the data operating system coordinates the data requests to the various underlying systems where the data resides, compiles the results, and feeds them back to the application that requested them. Just as a self-service tool allows experts and non-experts alike to get more done with less code, a data operating system makes it much easier to access and govern a wide range of data sources. When a data operating system is combined with a self-service tool, speed and efficiency are gained across all aspects of an analysis. That’s a strong value proposition worth understanding and exploring.
To learn more about how a data operating system like DataOS can help your organization democratize data and enable more self-service, download our white paper, A Paradigm Shift in Data Management.
Self-service has been a hot topic in the analytics space for many years already, and it won’t be going away any time soon. Given that, it is reasonable to ask: why is it that with all the focus it has received, self-service capabilities are still far off from where organizations want them to be? Is this due to the self-service concept being overhyped? Poor execution and implementation? Insufficient supporting technology? Something else? Let’s dig in and discuss.
We’ll start by outlining some of the common concerns and objections about self-service analytics before covering why these concerns and objections can be overcome and self-service made a reality.
Realizing the vision of self-service is not a simple task. Self-service is about making it easy for people to do things they couldn’t otherwise do without the experts’ toolsets. For example, those without SQL knowledge can still generate queries through an interface, and those who don’t know machine learning tools can still build a basic predictive model through a guided template. Thus, a self-service tool must take something complex and make it easy for someone with limited knowledge of the underlying process to safely make that “something” happen. Of course, without proper guard rails, a self-service tool can clearly be a dangerous thing to hand over to users. Much like putting a bunch of ingredients on a table and letting children try to make cookies, you have to be careful to predefine views of data and recipes for models so that self-service users are channeled down the proper path.
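A guided template is one common way to build those guard rails: the user fills in a few constrained choices, and the tool generates the query from a predefined recipe. The sketch below assumes a tiny, made-up set of allowed metrics and dimensions; anything outside the recipe is simply rejected.

```python
# Hypothetical "recipe" that channels self-service users down a safe path.
ALLOWED_METRICS = {"revenue": "SUM(revenue)", "orders": "COUNT(order_id)"}
ALLOWED_DIMENSIONS = {"region", "product_line", "month"}

def build_query(metric: str, dimension: str) -> str:
    """Turn two dropdown-style choices into SQL, refusing anything off the menu."""
    if metric not in ALLOWED_METRICS or dimension not in ALLOWED_DIMENSIONS:
        raise ValueError("That combination isn't available in self-service; ask the analytics team.")
    return f"SELECT {dimension}, {ALLOWED_METRICS[metric]} FROM sales GROUP BY {dimension}"

print(build_query("revenue", "region"))
# -> SELECT region, SUM(revenue) FROM sales GROUP BY region
```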
Another challenge with self-service tools is that they are configured to enable specific sets of analytical logic to be executed. However, there is a limitless number of business questions that can be asked and a limitless number of ways that analytics can be created to address those questions. As a result, no matter how strong a self-service tool might be, it is easy to identify questions that it just cannot answer. And that’s okay! Self-service analytics aren’t meant to be all-encompassing. They are meant to facilitate a broader range of users and a more efficient allocation of resources.
Yet another reality is that business users — especially senior executives — don’t have a lot of time and want their questions answered via the path of least resistance. This often leads to the executive requesting that a member of the analytics team execute an ad hoc analysis instead of using a self-service tool to do it themselves. This frustrates and distracts the expert from more complex work, but they can’t turn down the request. However, if a self-service tool is configured to easily answer questions so that no custom work is needed, the expert can quickly get the answer with limited distraction.
Now let’s look at the ways that self-service analytics can become a highly valuable reality.
First, self-service may not be perfect, and it certainly isn’t all-encompassing. However, don’t sell it short. It is tempting to take the glass-half-empty view and declare that self-service tools are insufficient since they can’t handle as many cases as we’d like. A better, glass-half-full view acknowledges that self-service tools can answer a range of questions, even if they can’t answer all of them — every question answered in a self-service fashion is one less question that the experts have to handle!
Next, if you take the time to look around your organization, there’s a good chance you’ll find a wide range of self-service successes. Don’t let those successes be forgotten or minimized just because self-service hasn’t solved everything. Self-service will never be foolproof, and it will never be complete. If forward movement is occurring and your organization is effectively expanding its self-service capabilities, embrace the progress as you push for more.
Finally, in the drive to enable more self-service, your organization will achieve a lot of positive things: better understanding of the breadth of data available, more complete cataloging and definition of both data assets and business problems, and increased usage of analytics as your organization becomes more data-driven. Accept that your self-service journey will never be complete and instead focus on all the waypoints that you pass as you continuously improve your capabilities.
To be successful with self-service, an organization must find the right mix of automated and standardized processes that can be deployed alongside a team that can handle any questions not yet automated. It must be recognized that not everything will be automated. A new, never-before-asked question might need some personal handling before it can eventually be automated and made available in a self-service fashion.
The key to success is to have a common platform that can handle all your self-service and ad hoc requests equally well. If ad hoc requests are being answered using the exact same underlying data, governance, and security infrastructure as self-service requests, then it becomes that much easier to migrate a new process from manual ad hoc status to fully self-service. Getting scalability and consistency in place is a crucial first step to a successful self-service journey.
One option to enable all of this is a data operating system such as DataOS from The Modern Data Company. A data operating system can identify, catalog, and govern data from any type of system from one central entry point. Better yet, it can apply consistent governance and security protocols to all data requests regardless of their source. This makes managing and implementing self-service tools easier than ever before. To learn more about how a data operating system like DataOS can help your organization democratize data and enable more self-service, download our white paper A Paradigm Shift in Data Management.