The 2008 Global Financial Crisis fuelled the expansion of central banks' mandates and data collection. New policies required central banks to gather data from a profusion of sources to anticipate the next crisis.
Beyond financial stability, central banks today also measure systemic risk, regulate and supervise banks, oversee digital currencies, and help combat climate change. These responsibilities have further warranted collecting and accessing alternative data sources, ushering in the financial big data era.
Big data and related technologies have garnered attention in central banks. But what does big data actually mean? It is an umbrella term for both structured and unstructured data processed using innovative technologies. In a special issue on “Big Data in Finance”, Goldstein et al. (2021) define big data according to three properties: (1) large size, i.e. many observations; (2) high dimension, i.e. many variables relative to the sample size; and (3) complex structure, i.e. unstructured data such as text, images, or audio.
Doug Laney, a pioneer in the field, defines it by the “3Vs”: volume, velocity, and variety. In short, big data is any data set whose size, dimensionality, or lack of structure requires new processing tools.
Central banks now have access to an unprecedented amount of data to inform monetary policy decisions, including internet data (social media and web-based activity). Most of it, however, comes from micro-level transactions between firms (e-commerce, credit card transactions), public statistics, and financial market data.
They harness big data to support their monetary policy and financial stability functions, i.e. economic analysis and nowcasting/forecasting, or to derive sentiment from semi-structured data. For example, the ECB collects data from digital trading platforms to produce daily eurozone yield curves, and the Bank of Mexico (BoM) and the UK Financial Conduct Authority (FCA) combine web-scraping and text mining to audit promotional materials and financial advice documents from financial institutions (Di et al., 2019).
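To make the scraping-plus-text-mining idea concrete, here is a minimal Python sketch of that general pattern: fetch a promotional page, strip the markup, and screen the visible text for phrases a supervisor might flag. The URL, phrase list, and screening rule are illustrative assumptions on our part, not the actual BoM or FCA pipeline.

```python
# A minimal sketch of the web-scraping + text-mining pattern described above.
# The URL and flagged phrases are hypothetical, chosen only for illustration.
import requests
from bs4 import BeautifulSoup

FLAGGED_PHRASES = [
    "guaranteed returns",  # promotional claims a supervisor might screen for
    "risk-free",
    "no fees",
]

def audit_page(url: str) -> list[str]:
    """Fetch a promotional page and return any flagged phrases it contains."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    # Strip the HTML markup and keep only the visible text, lower-cased
    text = BeautifulSoup(response.text, "html.parser").get_text(" ").lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in text]

if __name__ == "__main__":
    # Hypothetical URL, for illustration only
    hits = audit_page("https://example.com/fund-promotion")
    print("Flagged phrases:", hits or "none")
```

A real supervisory pipeline would of course crawl many institutions' sites and apply statistical text-mining models rather than a keyword list, but the basic collect-parse-screen loop is the same.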
Central banks are embracing big data. A 2020 survey found that about 80% of central banks used big data, up from 30% in 2015, and that about 40% of them used it to inform policy decisions.
Even so, adoption remains uneven: infrastructure, skills, and cost challenges keep many institutions from exploiting big data at scale.
Petropoulos et al. (2018) note that financial big data may exhibit noise, heavy-tailed distributions, nonlinear patterns, or temporal dependencies. Traditional econometric methods are often insufficient to analyse such data, calling for Artificial Intelligence (AI) and Machine Learning (ML) techniques.
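To see why such features trip up linear tools, consider a toy simulation (our illustration, not taken from Petropoulos et al.): a nonlinear signal with heavy-tailed Student-t noise, fit with ordinary least squares versus a random forest. The data-generating process below is entirely made up for this purpose.

```python
# Toy illustration: nonlinearity + heavy-tailed noise defeat a linear model
# where a flexible ML method copes. Synthetic data, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5_000, 2))
# Nonlinear signal plus heavy-tailed (Student-t, 2 degrees of freedom) noise
y = np.sin(X[:, 0]) * X[:, 1] ** 2 + rng.standard_t(df=2, size=5_000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    # Mean absolute error is more robust than squared error under heavy tails
    print(type(model).__name__, round(mean_absolute_error(y_te, pred), 2))
```

On data like this, the out-of-sample error of the linear fit stays close to that of predicting the mean, while the random forest recovers much of the nonlinear signal, which is the gap the AI/ML literature in central banking is trying to close.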
Today, data is one of the most valuable commodities, and because central banks are inherently data-driven, they will continue to exploit big data, ML, AI, and related technologies to enhance their functions and processes. However, doing so will also continue to present hurdles, particularly around ethics, data quality, and interpretation.
Interested in this topic? Visit the FIRE Centre and our project on AI and Monetary Policy.
Written by Sascha Steffen and Alexandra Kinywamaghana