
Hospitals and healthcare organizations face no shortage of obstacles moving into 2023. Despite that daunting outlook, the healthcare industry has an opportunity to transform itself, as long as it sets the right goals and supports them with analytics. Let’s look at some of the critical goals healthcare organizations can use data to support in order to build a strong, innovative foundation for the coming years.
When the COVID-19 pandemic began, Congress enacted the Families First Coronavirus Response Act (FFCRA) in response to the swelling healthcare needs of the general public. As a condition of receiving a temporarily increased federal Medicaid match rate, states were prohibited from disenrolling individuals from Medicaid while the public health emergency (PHE) was in effect. As a result, total Medicaid and CHIP enrollment grew to 90.9 million by September 2022, nearly 20 million more than in February 2020.
However, Congress is now set to end the continuous enrollment provision this year and phase down the enhanced match rate. This unwinding process will likely see around 18 million people drop off Medicaid as qualifying status is redetermined, with nearly 4 million of those becoming uninsured. Although the effects of unwinding will depend largely on each state, hospitals and healthcare organizations must begin preparing for the confusion now.
Much of that preparation will involve investigating the local impact of redetermination. Important descriptive data would include who in the service area is currently covered and who is at risk of losing that coverage; predictive inquiries could estimate how many of those patients are likely to become uninsured or shift to other payers.
The end of the PHE will cause some upheaval. Still, healthcare organizations can leverage data tools to understand the current state of healthcare in their area and to make better predictions about what to expect.
We almost titled this section “embrace agile,” but it’s more than that. Nearly every industry is embracing some level of agile operation because if the pandemic taught us anything, it’s that change happens. The most resilient organizations are those able to pivot quickly while making great decisions and continuing to serve their customer bases — or in the case of healthcare, patient bases.
The key foundation for this particular resolution is ensuring that data-driven decision-making is the norm. Many healthcare organizations are flattening management structures to improve hospital efficiency. Others are making tough choices about which programs and services to keep and which to cut in order to reduce costs. Both of these infrastructure-related decisions — one organizational and one financial — will require top-quality, real-time data to get right.
Ideally, healthcare organizations will be able to harness all data from multiple systems, including legacy ones. This data can help make key decisions easier. For example, hospitals pushing into care that happens before or after a hospital stay — think ambulatory care or hospital-at-home programs — can determine the type of care that will best serve area patients while minimizing financial risks to the organization. In addition, a hospital might determine when it’s most beneficial to bring in a physician as a full-time medical staff member instead of relying on partnerships with private physicians.
Whatever the decision is, grounding it in real-time data will allow hospitals to improve decision-making. They’ll be able to shift services more quickly to meet demand while optimizing existing infrastructure and resources.
Healthcare organizations must do more than cut costs to succeed. There must also be strategic initiatives to grow and expand services and acquire new patients. Organizations might be tempted to focus only on restrictions in response to continued disruption, but this won’t serve their best interests in the long run.
A recent McKinsey survey noted that a majority of respondents planned to focus on diversification over the next three years. This tracks with industry trends showing growth in care delivered outside the traditional hospital setting, including the ambulatory care and hospital-at-home programs mentioned above. According to the same McKinsey study, rising costs and a reliance on investment income in the nonprofit healthcare sector currently leave many organizations vulnerable to economic shocks.
As a response, diversification of services could help prevent major upheaval each time a disruption occurs. Bringing primary care doctors into full hospital employment, an example mentioned in the previous section, is one way these organizations are adding key service components that bring value to patients while generating new revenue.
Growth requires a clear look at the data, but silos in healthcare data can prevent organizations from finding new opportunities and understanding what services will bring the greatest return. Building a unified data ecosystem can help facilitate coordinated, strategic growth.
Healthcare organizations and the industry itself understand the way that disruption can derail services and change the way hospitals operate nearly overnight. Surviving and thriving in the modern healthcare landscape means adopting a more agile, flexible approach to delivering services and making decisions — balancing the need to be efficient with the necessity of innovation. And with major changes like redetermination on the horizon, getting that balance right is more important than ever.
Building data-driven decision-making into the equation can help healthcare organizations understand and forecast the impacts of changes like redetermination, balance efficient growth with cost efficiency, and better understand the patient community they serve.
Learn more about how healthcare organizations can more easily reap the benefits of becoming a data-first organization in our white paper, Data Mesh + Patient360: A Modern Revolution for Healthcare Data.
Is Your Head Too High up in the Cloud?
There is no doubt that the cloud is here to stay and that it will be a part of every company’s future data and analytics strategy. However, knowing that the cloud is an important piece of the puzzle does not mean that companies aren’t making a lot of mistakes with cloud migrations and implementations. While there are many cloud success stories, there are also a lot of stories of frustration, missed deadlines, cost shocks, and lack of anticipated results. This blog post will discuss some of the common causes, which have nothing to do with technology and everything to do with poor planning.
The cost structure of the cloud is fundamentally different from that of on-premises platforms, and the difference is not always guaranteed to favor the cloud. Whether it works for or against any given company depends on how well that company understands the cloud’s cost structure and how well it adapts its practices and protocols to account for it.
Charging for every CPU cycle and every piece of disk storage used is central to cloud models. On the surface, this sounds great. After all, why would you want to pay for resources you aren’t using? The problem is that a lot of on-premise high-volume processing isn’t very efficient. When you own the computers and they are sitting mostly idle at night, the incremental cost to have an inefficient analytical process execute is virtually zero. As a result, the common target for coding efficiency in an on-premise model is to get things efficient enough that they don’t interfere with other needs.
The “efficient enough” model makes sense when you own equipment and it has spare capacity. However, coding to that standard will rapidly consume your budget if it is done in a cloud environment. Therefore, code efficiency is more important than ever in the cloud. Unfortunately, many companies realize this the hard way after seeing huge and unexpected charges caused by simply migrating existing code as-is to a cloud environment.
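To make that difference concrete, here is a back-of-the-envelope sketch in Python. Every number below is a hypothetical placeholder rather than a quote from any provider; the point is only that per-use billing turns inefficiency from a sunk cost into a recurring bill.

```python
# Toy comparison: what an inefficient nightly job costs on owned hardware
# versus in a pay-per-use cloud. All rates and runtimes are hypothetical.

CLOUD_RATE_PER_NODE_HOUR = 4.00   # assumed on-demand rate, USD
NODES = 8                         # cluster size the job occupies
RUNS_PER_YEAR = 365               # a nightly batch job

def annual_cloud_cost(runtime_hours: float) -> float:
    """Cloud billing: every node-hour consumed is charged."""
    return runtime_hours * NODES * CLOUD_RATE_PER_NODE_HOUR * RUNS_PER_YEAR

# On-premises, the same job runs overnight on hardware that is already
# paid for and otherwise idle, so its incremental cost is near zero.
inefficient = annual_cloud_cost(runtime_hours=6.0)   # "efficient enough" code
tuned = annual_cloud_cost(runtime_hours=1.5)         # same job after tuning

print(f"Inefficient job: ${inefficient:,.0f}/yr")    # $70,080/yr
print(f"Tuned job:       ${tuned:,.0f}/yr")          # $17,520/yr
```

Under these made-up numbers, tuning a single job from six hours down to ninety minutes saves more than $50,000 a year, for an inefficiency that was effectively free on idle, owned equipment.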
Another common error with cloud migrations is for companies to assume that moving to the cloud will in and of itself fix existing process issues. Most process issues aren’t about technology, but about the policies that surround the technology’s usage. For example, many companies struggle to keep an up-to-date inventory of all of the various analytical processes that are in place, what data they utilize, and what their output is used for. If the process for compiling this information isn’t meeting expectations in an on-premise environment, a cloud migration is simply going to shift the problem somewhere new.
It is important that organizations carefully consider which of their problems can be solved by a cloud migration and which can’t. For example, meeting intermittent needs for a massive amount of processing is easily addressed in a cloud environment but documenting the processes involved in that processing is not. There is no way around the hard work to understand what’s working, what isn’t working, and how to fix it. Many of the problems companies have that surround their technology implementations are far more process driven than technology driven. Disappointment is the inevitable result when it is recognized too late that a problem won’t be solved just by moving to a new platform like the cloud.
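As a minimal sketch of the process discipline in question, here is one shape an entry in an analytical-process inventory might take. The fields are our own illustrative suggestion, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticalProcess:
    """One entry in an inventory of analytical processes.

    Keeping records like this current is policy work, not technology
    work: it is just as necessary, and just as often neglected, in the
    cloud as it is on-premises.
    """
    name: str                                           # e.g., "nightly_churn_scoring"
    owner: str                                          # accountable team or person
    inputs: list[str] = field(default_factory=list)     # datasets consumed
    outputs: list[str] = field(default_factory=list)    # datasets produced
    consumers: list[str] = field(default_factory=list)  # who relies on the output
    schedule: str = "ad hoc"                            # cadence, if any

inventory = [
    AnalyticalProcess(
        name="nightly_churn_scoring",
        owner="analytics-team",
        inputs=["crm.customers", "billing.invoices"],
        outputs=["scores.churn_risk"],
        consumers=["retention-dashboard"],
        schedule="daily 02:00",
    ),
]
```

Whether such records live in a spreadsheet, a wiki, or a catalog tool matters far less than the policy that keeps them current, and no migration writes that policy for you.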
One factor that can lead companies astray is the desire of key employees to accrue certain experiences on their resumes. At the start of the big data era in the early 2010s, implementing Hadoop was considered a prime resume builder. As a result, many technology executives chartered Hadoop projects as much to get one under their belt as to meet a clear corporate need. Today, the same pattern can be seen with cloud migrations. Every technology team member, from entry level to senior executive, wants to be able to say they were part of a cloud migration.
That is not to say that most companies shouldn’t move to the cloud, nor that the intentions of those sponsoring the projects are corrupt. The impact of this issue is typically more along the lines of moving a little too fast with too little planning in order to get the cloud migration box checked sooner rather than later. If having a cloud migration on a resume weren’t so appealing, many organizations might be more methodical and cautious in their approach.
Many companies have struggled over the years to fully deploy data and analytics processes. While there are tools in the cloud that facilitate deployment, it still isn’t a simple matter to deploy complex analytical processes there. As discussed above, there are technical as well as process barriers to successful deployments. If a team hasn’t been effective with deployments in a traditional environment, don’t assume the cloud will solve that problem just because it has a lot of native tools to help.
A rational approach is to recognize that the tooling available in the cloud can help streamline and standardize deployments. However, it is still necessary to have a process and to maintain discipline when deploying. Realistically, the first deployments done on the cloud will probably go even slower and less smoothly than in the past as teams get to know the new cloud environment and its new protocols. Don’t look past that learning curve.
One technology that can help make a cloud migration successful is a data operating system such as DataOS from The Modern Data Company. A data operating system provides a single, up-to-date view of all systems from one place, whether cloud or otherwise. It inventories all available data, applies security and governance protocols, and routes data requests and queries properly across the underlying systems. As legacy systems move to the cloud, simply redirect the data operating system to the new location and users will see a seamless transition.
A data operating system also helps with the problems outlined previously in this blog post. While a data operating system won’t change the cost structure of the cloud, it will provide additional visibility into what requests are being sent to any given cloud instance and how that data is being combined with other data. This will help an organization understand its cloud usage better. Similarly, a data operating system won’t magically fix broken processes. However, it does make it easier to track, tune, and manage them. It will also ease deployment challenges since new processes can be deployed from a central location. Last, while it can’t stop people from doing things for resume enhancement, implementing a data operating system is itself a positive resume builder. So, for those sensitive to such things, implementing a data operating system will be appealing for that reason even as it provides other benefits.
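For readers who like to see the pattern in code, here is a deliberately simplified sketch of the routing idea described above. It is a hypothetical illustration only, not the DataOS interface: a catalog maps logical dataset names to the physical systems that currently hold them, so a migration changes only the mapping.

```python
# Hypothetical sketch of the routing pattern a data operating system
# provides. All names are illustrative; this is not the DataOS API.

CATALOG = {
    # logical name        -> physical location (changes during migration)
    "sales.transactions": "onprem-warehouse",
    "crm.customers":      "cloud-lakehouse",
}

def route_query(dataset: str, query: str) -> str:
    """Resolve a logical dataset name and dispatch the query.

    Users always address data by its logical name, so when a dataset
    moves, only the catalog entry changes and callers are untouched.
    """
    location = CATALOG[dataset]   # governance checks would also run here
    return f"running on {location}: {query}"

# Before migration: served from the on-premises warehouse.
print(route_query("sales.transactions", "SELECT SUM(amount) FROM t"))

# After migration: repoint the catalog; users see a seamless transition.
CATALOG["sales.transactions"] = "cloud-lakehouse"
print(route_query("sales.transactions", "SELECT SUM(amount) FROM t"))
```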
To learn more about how a data operating system like DataOS can help your organization modernize its systems and end user functionality, download our e-book Maximize Your Data Transformation Investments.
Technology moves fast. Sometimes solutions to big challenges already exist, but more often, a problem appears before a solution. Companies must then take creative measures to “fix” technology challenges, leaving them with temporary solutions that quickly become obsolete. You can’t blame companies for playing the cards they’re given, but data debt is now costing companies more than they think, even when solutions seem to be working…for now.
Data debt is similar to technical debt. It’s the combined cost of continuous reworking and troubleshooting, where each new solution stage creates further challenges. Over time, teams spend more and more time trying to fix things instead of gathering value.
Data, unlike technology, seems simple on the surface. Companies gather and analyze data, use it to answer questions, and predict or understand behaviors. However, the data landscape is deceptively complicated.
Data debt has many causes. Animesh Kumar, co-founder and CTO of The Modern Data Company, distills them down to “the four horsemen of data debt,” challenges that, according to him, companies of all types and sizes grapple with.
As a result, companies that aren’t technology- and data-forward are incapable of realizing the full value of their data. No matter how much data they collect, what tools they buy, and what talent they manage to lure away from companies like Google and Facebook (if they can), data remains a source of powerful but hidden potential.
According to data expert John Ladley, businesses fall into one of four quadrants based on how they accumulate and understand their data debt.
But what are these costs? Let’s look at a few scenarios.
One result of data debt is poor data quality due to ineffective governance. An organization may strive toward data-driven decision-making, but if it cannot trust its data, it won’t succeed. This is particularly common at the data illiteracy stage because the company will continue to replicate errors in its data with each successive use.
For example, imagine an organization wants to target a new market for services based on purchasing data from the area for the last three years. They aggregate their own data plus data from partners and task marketing to create personalized advertising campaigns.
Unfortunately, the marketing team doesn’t have a complete picture of these potential customers because of inconsistent data. The marketing campaigns don’t have the expected ROI, and the company loses some of this potential market share to a competitor.
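A toy example makes the failure mode concrete. The records below are invented, but the pattern is the common one: the same customer spelled differently across sources, so a naive aggregation double-counts and the resulting segments misfire.

```python
# Invented records from two sources describing overlapping customers.
own_data = [{"customer": "Acme Corp", "spend": 1200}]
partner_data = [{"customer": "ACME Corp.", "spend": 800},
                {"customer": "Beta LLC", "spend": 500}]

# Naive aggregation treats "Acme Corp" and "ACME Corp." as two customers.
naive = {}
for row in own_data + partner_data:
    naive[row["customer"]] = naive.get(row["customer"], 0) + row["spend"]
print(naive)  # {'Acme Corp': 1200, 'ACME Corp.': 800, 'Beta LLC': 500}

# Governed pipelines standardize identity before aggregating.
def canonical(name: str) -> str:
    return name.lower().rstrip(".").strip()

governed = {}
for row in own_data + partner_data:
    key = canonical(row["customer"])
    governed[key] = governed.get(key, 0) + row["spend"]
print(governed)  # {'acme corp': 2000, 'beta llc': 500}
```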
It’s not just the business side that suffers, though. Another cost of data debt is the data swamp, the result of “vague data modeling and suboptimal storage mechanisms,” according to Kumar. At the data realization stage, companies are highly susceptible to this cost because they’re at risk of implementing fancy solutions with no real long-term strategy.
Data swamps are difficult to query and complicated to maintain. They emerge because enterprises are trying to manage data from multiple sources with no overarching plan and because of silos that make sharing between departments very difficult. It prevents collaboration between business teams and IT on coherent business models necessary for decision-making.
Data swamps cost a lot in the short term because business teams must wait a long time for answers to queries and for requested pipelines to be built. It’s challenging to make data-driven decisions with any kind of timeliness. The long-term cost is a spiral toward greater chaos in the data swamp as IT teams become overwhelmed trying to monitor and maintain databases and existing pipelines while working on new ones. The longer the swamp persists, the worse it gets.
It’s not just money that companies need to consider. Data debt also costs companies resources, time, and effort. When companies are data illiterate or data resistant, they might attempt to follow governance policies but frequently make exceptions or ignore them entirely. This keeps processing costs high and ties up resources that could be spent elsewhere.
For example, imagine a company invests heavily in building an experienced data science team. However, that team discovers that fragile pipelines require constant reworking, putting most of this team’s knowledge and expertise toward backward-looking tasks like recovery and troubleshooting.
They could be capable of innovation in strategic tasks that skyrocket the data’s value, but that company will never know — as long as they refuse to acknowledge their data debt.
Companies can’t run from data debt forever; eventually, all debts come due. Companies need a way to integrate all data tools and sources into the data stack in a composable yet stable way. In addition, business users must have support to explore data through a governed, self-service portal and build stable pipelines for everyday decision-making.
Once this happens, IT teams can shift the majority of labor and resources from maintenance and troubleshooting tasks to higher-order activities. This allows companies to take advantage of IT expertise to build more complex data models that move the business needle forward.
DataOS is the world’s first data operating system. It’s designed with business users in mind to provide self-service dashboards and drag-and-drop engineering. Administrators can govern data access through attribute-based controls, and IT users can get behind the scenes to build the apps and tools the company needs for big data processing.
Find out how DataOS can help you chip away at your data debt.
Another year, another chance to learn more about the world of data. In 2023, The Modern Data Company (Modern) hopes to reach more companies and organizations with our data operating system, build incredible value from existing and upcoming data assets, and share insights into major shifts in what it means to be data-driven. If you haven’t been with us long, we had some incredible pieces in the past few years. If you have, it’s time to revisit what you may have forgotten or missed.
These were our most popular blog posts in 2022 according to reader statistics. Let’s catch up and revisit what you loved. You’ll want to save this post and its content for valuable reading all month long.
You’ve heard the doomsday noise: customer 360 is dead. Customer 360 is the way forward but at what cost? Customer 360 is impossible, so stop trying. Customer 360 is a wonderful ideal but ultimately untenable.
So they say.
We heard a lot about the future of customer 360 in 2022, with well-intended voices encouraging businesses to finally drop it. However, at Modern we believe that companies don’t have to abandon customer 360 simply because they can’t find a way to integrate data sources. We strongly believe we’ve solved the problem of data silos and have made integration worries a thing of the past. Customer 360 isn’t just an idea; it’s here, thanks to DataOS.
Read how we did it here: You Don’t Have to Abandon Customer 360
See also: Improve Your Customer Data Platform with
“A recent survey from Havas Group found that a staggering 75% of brands could simply disappear overnight and no one would notice or care. That statistic should shake companies and their marketing teams to the core.
Customer loyalty is low, and the age of personalization has arrived. Businesses hoping to survive and thrive long-term have put dynamic customer experiences at the top of their priority lists. The goal? Increase customer lifetime value (CLV).”
Thus begins another popular Modern blog post from 2022. With such a severe statistic on the table, companies need a way to manage their customer data with finesse and granularity. Consumers are more than willing to share personal data with companies — so long as they see the positive results from that trust in the form of an unforgettable customer experience. And our blog post shows the first steps towards unifying data once and for all.
Read all about it here: Improve Customer Lifetime Value with a Unified Data Platform
See also: The Lack of Unified Data Operations Is Limiting Your Customer Experience Innovation
Companies face a pair of major challenges when it comes to digital transformation, and this blog post explores exactly what it takes to overcome them by building data models that capture and drive business value. We also reveal the secret ingredient to a well-made data model and, therefore, a well-developed model for digital transformation. It really is this simple.
Read how to make it happen here: Want a Model for Digital Transformation? First, You Need a Data Model
See also: The Modern Guide to Finding the Hidden Value of Data
At Modern, we’re passionate about data. Our internal resources and discussion boards are profound examples of this dedication and creativity. And at MIT CDOIQ, we finally got the chance to bring our passion for transformed data to a broader audience.
MIT’s Chief Data Officer and Information Quality (CDOIQ) Symposium is a key event for sharing cutting-edge ideas about the world of data and information technology. Modern was there to hear expert talks and network with like-minded professionals, learning their most pressing concerns about the future of data. We also had the opportunity to talk about DataOS and the new paradigm it presents to the world of data.
Read all about it here: MIT CDOIQ: The Modern Data Company Introduces DataOS
See also: DataOS® – A Paradigm Shift in Data Management
We didn’t happen upon the data operating system by accident. Our founders — and, as a result, the entire company — think about data differently than everyone else. This post highlights exactly how our founders taught us to think differently about data, why it matters, and the cornerstones of this new paradigm.
Read the details here: 3 Ways the Modern Data Company Thinks Differently About Data
See also: The Six Essential Functions of the Modern Data Platform
It might seem strange to announce a company after working with big clients and building a thriving startup over the last few years. But here’s the thing: our story went from “unlocking data value” to “reinventing the entire way we think about data,” and that deserves a brand new introduction to who we are and why we do what we do.
Read all about it here: Announcing: The Modern Data Company
See also: The Story of Our Brand is the Story of Modern
The healthcare industry is at a crossroads. Patient data is crucial for delivering high-quality care and research that puts desperately needed new products and services on the market. The challenge is protecting this very sensitive patient data while ensuring that those who need it — healthcare providers and researchers — can readily access it.
So how does the healthcare industry balance these two competing ideals, protecting data and freeing it? And how can it transform with minimal disruption to daily operations?
With a data operating system.
Find out how it all works: The Key to Digital Transformation in Healthcare – Data Integrity
See also: Harnessing the Power Behind the Healthcare Data Boom
What is DataOS? It’s the world’s first holistic data operating system on the market. Why is DataOS? Because of Modern’s four pillars. None of them should come as a surprise. We outline exactly what each pillar means to DataOS and to Modern as a company, and why the pillars should be the foundational ways companies think about their data.
Read it here: Four Pillars of DataOS
See also: The Core Principles of a Modern Data Platform
Retailers are under increasing pressure to deliver dynamic customer experiences, understand what consumers need at any given moment, and personalize, personalize, personalize. They pour enormous time and resources into building the right customer experience, yet all it takes is one security breach to lose everything.
Retailers must build resilient digital infrastructures while modernizing legacy systems and leaving no security loopholes — all without any significant disruptions to operations that would send customers to competitors.
Find out how they can make it happen here: How Retail Can Build a Security-First Data Architecture
See also: R.I.P. Rip and Replace – Discover the Better Way to Modernizing Your Retail Data Architecture
Sometime in 2020-2021, data fabric was everywhere. Experts couldn’t stop mentioning it. Analysts couldn’t stop discussing it. And organizations couldn’t stop asking, “What is it?”
As it turns out, companies are still wondering how data fabric fits into their digital transformation efforts. And while Modern has moved beyond “a data fabric company” to a broader effort to reinvent the way we think about data, we want everyone to know:
Organizations can build a resilient, comprehensive data fabric using DataOS.
Read more about how DataOS enables data fabric, data mesh, and other cutting-edge infrastructures here: Data Fabric is the Answer to Your Agile Dreams
See also: A Paradigm Shift in Data Management – Deploying A Data Fabric
We have resources and guidance for just about any industry and use case you could possibly imagine. Stick with us in 2023 and beyond to see how a data operating system like DataOS can transform your data infrastructure, rebuild your data pipelines, enable enterprise-wide data literacy, and more.
Eager to get started? Schedule a demo and see it all for yourself.
Banking and capital markets are undergoing a period of transformation. The global economic outlook is somewhat fragile, but banks are in an excellent position to survive and thrive as long as they have the right tools in place. According to Deloitte’s report 2023 Banking and Capital Markets Outlook, banks must adapt to global disruption and understand the changing needs of consumers in order to succeed. And much of this involves finally harnessing data and new technologies to their fullest potential.
The report lists several areas where consumer demand will shift banking products. Without a clear plan, financial institutions could face serious consequences to their bottom line. For example, consumers are less likely to tolerate “junk fees” and will begin shifting away from traditional banks to non-traditional and online options to avoid them.
The report mentions several key areas ripe for a data revolution. First, the retail banking space must contend with new technologies such as cryptocurrencies and develop applications that advance ESG initiatives, racial equity, and security. This means moving beyond product-centric thinking to a data-driven customer experience model that’s consistent across all channels.
Next, the wealth management industry is also shifting away from a product focus to a client-centric model. Data will enable this industry to shift to scalable solutions and ensure greater customer loyalty. Commercial banking is also at this precipice and must leverage data-driven decision-making to help customers deal with supply chain shocks, inflation, and a fragile economy.
Using an integrated and actionable view of data, banks and capital markets can develop a range of new offerings. In each case, financial organizations need access to real-time data to understand customer behavior, identify key customer frustrations and pain points, and develop products that provide true value.
Banks and financial institutions are currently reassessing the value of traditional services and exploring boundary-pushing new products and services suited to the world as we know it. Geopolitical conflicts, recent monetary policy and regulations, and fractured payment systems can make it significantly harder to deliver consistent, high-quality services to a global consumer base.
Deloitte has already identified several areas as potential opportunities. In each of them, banks and capital markets will need to leverage not only data but also the technologies that make processing that data possible. For example, using artificial intelligence and machine learning, banks can better protect customer identities across multiple channels while ensuring that sensitive customer data remains absolutely secure.
Institutions in the banking and capital markets industry have massive amounts of data. They need a way to ensure the security and quality of data feeding new technologies such as AI-enabled tools, digital verifications, and customer 360 experiences.
The problem many banks have is that data tools are fractured and dependent on manual processes. These institutions need a holistic view of data that connects to all platforms and applications, including any legacy systems. Only then can they supply the amount of data required to understand customer behavior and deliver dynamic customer experiences.
In addition, financial data is highly sensitive and subject to multiple regulations and privacy requirements. If banks and capital markets adopt centralized governance built on attribute-based access controls and on moving only the data necessary for processing, they will be able to put more data to work, faster.
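As a minimal sketch of what attribute-based access control means in practice (a generic illustration, not DataOS’s implementation), the access decision combines attributes of the user, the data, and the requested action rather than consulting a fixed role list:

```python
# Generic attribute-based access control (ABAC) sketch. Illustrative
# only; not DataOS's implementation. The decision weighs attributes of
# the user, the resource, and the action together.

def allow(user: dict, resource: dict, action: str) -> bool:
    # Regulated fields may only be read by users cleared for them,
    # and only within the region where the data resides.
    if resource.get("sensitivity") == "pii":
        return (
            action == "read"
            and user.get("pii_clearance", False)
            and user.get("region") == resource.get("region")
        )
    return action in {"read", "query"}

analyst = {"region": "EU", "pii_clearance": True}
ledger = {"sensitivity": "pii", "region": "EU"}

print(allow(analyst, ledger, "read"))    # True: attributes line up
print(allow(analyst, ledger, "export"))  # False: action not permitted
```

Because the rule evaluates attributes at request time, the same policy covers new users and new datasets without anyone editing a role matrix.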
The banking and capital markets industry is on the precipice of something new. Companies must pivot toward services that consumers crave while grappling with changing regulations and integrating new technologies and services. If they can streamline their data pipelines to provide actionable insights, they can usher in the next generation of financial services and survive the next global disruption.
DataOS is the world’s first data operating system. It can provide banks and capital markets with a seamless data architecture designed to modernize legacy systems without replacing them and to upgrade current tools with minimal impact on operations. Instead of a patchwork set of tools and multiple copies of data, users can customize dashboards for the insights they really need. In addition, business users can leverage drag-and-drop engineering to build pipelines without extensive coding.
If the finance industry can adopt such an operational layer, fractured data ecosystems will become a challenge of the past. To find out how DataOS can make this a reality for your financial institution, contact The Modern Data Company for a demo and see it all in action.