To say that the COVID-19 pandemic has turned our world upside down is an understatement, made more so with each passing day. The impacts on daily lives — lockdowns, masks, the threat of grave illness, extraordinary economic dislocations and uncertainties — are concrete and tangible to nearly everyone. But along with the many obvious changes that are taking place, a number of more subtle but equally powerful shifts are occurring that will have a profound impact on the way that AI will be viewed and used going forward by financial services companies and their customers.

As 2019 came to a close, no one was expecting predictive statistical models to dominate the collective consciousness, and yet the term "flatten the curve" has become a top global meme. Charts and graphs showing the expected progress of COVID-19 under different sets of assumptions have become front-page news. Even those who know nothing about statistics are increasingly aware that predictive models are being used to make extremely important decisions about many aspects of our lives.

For data science professionals, there's been a stark wake-up call regarding the speed with which the world can change and how rapid change impacts the ability of AI to drive good decisions. For AI systems to produce meaningful and valuable results, it's essential to train machine learning (ML) models using data that accurately reflects the underlying reality or operating regime about which the model is expected to make predictions. A good model, trained with stable data, is able to inform appropriate actions in the broadest possible range of anticipated operating conditions. In the normal course of events, the operating regime will change as consumer sentiment, regulations, technologies, competitive dynamics, methods used by fraudsters, and any number of other factors change. The key is to be able to retrain and redeploy models faster than changes in the operating regime. Equally important, especially in financial services, is the ability to simulate or "stress test" models to anticipate possible scenarios and understand how models will behave. Without frequent, rapid retraining and simulation, the value of an organization's models and the impact of their decisions can be highly vulnerable to changes in the operating regime, especially if the changes are large, rapid, and without precedent.
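One common way to detect that an operating regime has drifted away from the training data is to compare the distribution of a model input today against its distribution in the training window. The sketch below uses the population stability index (PSI), a drift metric widely used in financial-services model monitoring; the data, the 0.25 threshold, and the retraining trigger are illustrative assumptions, not a prescription from the text.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution of a model input between the
    training window ("expected") and live data ("actual").
    A PSI above ~0.25 is a common rule of thumb for a shift
    large enough to warrant retraining."""
    # Cut points come from the training data, so both samples
    # are binned on the same scale.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e_frac = np.bincount(np.digitize(expected, edges), minlength=bins) / len(expected)
    a_frac = np.bincount(np.digitize(actual, edges), minlength=bins) / len(actual)
    # A small floor avoids log(0) in empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # hypothetical stable regime
live = rng.normal(1.5, 1.0, 10_000)   # hypothetical shifted regime

if population_stability_index(train, live) > 0.25:
    print("regime shift detected: retrain and redeploy")
```

In practice a check like this would run continuously against production feature pipelines, with the retraining trigger feeding into the deployment process rather than a print statement.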

Consider a concrete example. Suppose you have a model that predicts municipal bond prices, and it has been trained with data from January 2010 through December 2019. During this time, several relationships held true — e.g., bond and stock prices were inversely correlated. What happens if there's a one-week stretch in March 2020 during which stocks and bonds both go down together — something never before seen in the training data? How will the model react? How quickly does the model need to be retrained? Does the model even hold anymore? And suppose a model had been trained during stress testing with simulated data that reflects the current regime; how long would it take to get that model into production?
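The break described above can be made visible with a rolling-correlation check on the two return series. The numbers below are simulated for illustration — an inverse stock/bond relationship for most of the window, then a hypothetical week in which both fall together — and the sign-flip test is one simple way to flag that a training-era assumption no longer holds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily returns: for most of the window bonds move
# inversely to stocks, matching the 2010-2019 training regime.
stocks = rng.normal(0.0005, 0.01, 60)
bonds = -0.5 * stocks + rng.normal(0.0, 0.002, 60)

# Hypothetical crash week: stocks and bonds both fall together,
# a pattern absent from the training data.
stocks[-5:] = -np.abs(rng.normal(0.05, 0.01, 5))
bonds[-5:] = -np.abs(rng.normal(0.02, 0.005, 5))

def rolling_corr(x, y, window=20):
    """Correlation of x and y over each trailing window."""
    return np.array([
        np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
        for i in range(len(x) - window + 1)
    ])

corr = rolling_corr(stocks, bonds)
# A flip from the historically negative correlation to a positive
# one signals that the model's training-era assumption has broken.
if corr[-1] > 0:
    print("correlation regime break: stress-test and retrain")
```

A monitoring check like this answers the "does the model even hold anymore?" question mechanically, rather than waiting for degraded predictions to surface in business results.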

As many organizations have found out — or will shortly — getting models from the lab into production and keeping them updated as operating regimes change is very challenging. In many large enterprises, the process of moving models into production is immature and replete with delays. Many departments are involved and need to sign off on the deployment after the data science team's work is done, including DataOps, DevOps, IT, compliance, and the line of business. In most organizations, getting from "model ready" to deployment takes weeks or months, with inefficient processes and organizational friction contributing as much to the problem as technical issues, or more. As a result, the notion of responding effectively to a fast-moving change in operating regime — like a pandemic — is simply not yet possible.

Financial institutions have been using models for decades and have dealt with crises before, so what's new? One key factor is that AI models are very different from traditional statistical/decision-rules models and conventional software. Yet most enterprises have not organized themselves around the principles of enterprise AI, in which traditional business, actuarial, optimization, and other types of models are modernized to be driven by ML/AI algorithms and operationalized, automated, and governed at enterprise scale.

A new discipline is emerging in the large enterprise called ModelOps that, in ways analogous to (but different from) DevOps, combines process, technology, and organizational alignment to enable models to move quickly from data science into production without compromising visibility, operational control, or governance. When implemented as an enterprise-wide capability accountable to the CIO, ModelOps enables organizations to ensure they can get new and updated models into production as fast as the operating regime is changing. The alternative is to see AI investments squandered, or worse, to drive business decisions based on models that no longer reflect the world they're operating in. This is the "Corona Effect", and those who are in the business of developing and using AI in the real world need to take heed.

For the moment, consider where operating regimes have shifted (and will continue to shift) as the pandemic peaks, ebbs, and returns to a "new normal." The ability to respond to these unanticipated and potentially dramatic shifts in operating conditions as they occur is the ultimate goal of enterprise AI.