
Session 1 - New data needs for monetary policy


Transcript: English (auto-generated)
Thank you. It is a great pleasure to chair this panel. Indeed, remarkable achievements, impressive; I think that's true. For me, there have been major changes in recent years. As Mario was explaining, I call it tracing the money. When we came up with the TLTRO, you know, funding for lending, we wanted to see where the money was going, and having access to data from individual banks, anonymized, under strict conditions, was one of the key information inputs we had to see whether it works or not. And this is one little example among many others. And I was very proud, because I remember when you asked me, some years ago, to defend the case in the Governing Council that it would be useful, and we didn't know at that time that we would come to adopt that sort of monetary policy instrument. So when the decision came much later, we had some of the information tools to see a little what happens with the money we lend to the banks. I see some colleagues here and there who were behind that project. So it is a great pleasure to chair this first session. It's about data needs, so we look at the future, and monetary policy. We will start
with Jan Smets, governor of the National Bank of Belgium. Jan, we will then follow in the order here with Pablo Garcia, executive board member of the Central Bank of Chile, quite a name in economics, but also in statistics in particular. And then Ewald Nowotny, Central Bank of Austria. And then our discussant according to the program is Paul Mortimer-Lee, obviously not present now, so: Natasha. Thank you very much, Natasha. Natasha just joined the central bank as Deputy Director General for Monetary Policy. So thank you, Natasha. And I didn't force Natasha to come here; you did it with pleasure, so thank you. Paul Mortimer-Lee is sick, really; he has a good excuse, and it's not too bad, I heard, but he really couldn't make it. So, Jan, maybe you start now. It's about 10 minutes per presentation, I think. We have PowerPoint presentations, and then we will open the discussion. Looks like, yeah. Is it? Okay. It's okay. It's okay. Now, dear Peter, dear friends, dear colleagues,
ladies and gentlemen, thank you very much for inviting me to this conference, and especially
this particular session, as it is about the interactions between statistics and the future of monetary policy, and we know that this is a hotly debated topic. You will agree that some important and probably
interrelated things have been happening over the last decade or so in the monetary policy landscape, not only have central banks broadened their toolkit to tackle the financial crisis, but also the economic world has been faced with structural changes, and
besides, both the use of new instruments and the availability of new data have spurred advances in monetary policy research. Now, the interplay of these phenomena could potentially have, I think, serious implications for the way we think about monetary policy
going forward, be it in terms of objectives, instruments, transmission channels, or data monitoring. But, and that is, so to speak, the message I want to convey here, to avoid drawing overly hasty conclusions, I think this requires careful reflection, and in
some cases implies, indeed, new data needs. Only after careful investigation of the issues at stake can lessons for the future of monetary policy be drawn. In my remarks today, I will focus more specifically, as you see, on three phenomena that challenge
our traditional thinking on monetary policy. First, the role played by heterogeneity, which has been clearly demonstrated by the use of new monetary policy tools, and typically, indeed, as the President alluded to, more targeted, and something which is increasingly
documented in economic research and data. Second, digitalization, probably one of the most notable structural economic changes over the last few years, which also opens the door to a new world of big data. And third, the changing role of the financial sector. I intend to raise a number of questions to foster the debate and hopefully help
to structure our thinking. How exactly have these challenges called into question the consensus on which monetary policy has been based? How can new data help to identify where and to what extent the practice of monetary policy has changed? And more
speculatively, can new data shape a possible new normal for monetary policy? And needless to say, I do not want to provide any definite answers to these key questions. They have far-reaching implications, making it unrealistic, I think, to settle all this on this panel.
But before looking ahead, let's first look back a bit. The data that central banks traditionally look at are broadly tailored to the New Keynesian model paradigm, with a view of the world in which, and I'm grossly oversimplifying here, not doing justice to macro modelers nor to policymakers, the central bank operates in a framework where representative agents interact, where production is labor-intensive and where the role for financial factors
is limited. Building on rational expectations and sticky prices, inflation is driven by expected inflation and the anticipated change in real marginal costs, the so-called new Keynesian Phillips curve, which links economic activity and inflation. And in this setup,
monetary policy should aim for price stability, and doing so requires bringing aggregate demand into line with the potential output path. And the prime way to do so in these models is by steering individuals' intertemporal choice between consuming today versus tomorrow.
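In textbook notation (a standard reference formulation, not taken from the speaker's slides), the two building blocks being described are the New Keynesian Phillips curve,

$$\pi_t = \beta\,\mathbb{E}_t[\pi_{t+1}] + \kappa\,\tilde{y}_t,$$

with $\tilde{y}_t$ the output gap (proportional to real marginal cost), and the log-linearized consumption Euler equation,

$$c_t = \mathbb{E}_t[c_{t+1}] - \frac{1}{\sigma}\left(i_t - \mathbb{E}_t[\pi_{t+1}] - \rho\right),$$

through which the policy rate $i_t$ steers that intertemporal consumption choice.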
And this gives a key role to interest rates, where the working assumption is that the central bank perfectly steers the interest rate that is relevant for the representative agent. The careful monitoring of macroeconomic aggregates and their projections successfully supports monetary policy decisions in this view of the world, which seemed, I would say, fairly appropriate until about 10 years ago. But as I hinted in my introduction, several developments might have called this fairly simple framework into question. So
let me indeed focus on the three challenges to the standard practice of monetary policy and their data dimension that I just mentioned. And the first one, as I said, is heterogeneous agents. The appropriateness of representative agent models has been challenged quite strongly
since the crisis. For instance, some people have claimed that monetary policy tools aimed at stabilizing macro aggregates have harmful side effects on specific sectors or
types of economic agents. The allegation that asset purchases increase wealth inequality, that a low-rate environment punishes savers, or that easy monetary policy facilitates the survival of zombie firms are just a few examples. But are we only talking about
possible side effects of some measures here? I think these reflections are a broader indication of how heterogeneity can also be a transmission channel for monetary policy. And going one step further, it could appear that monetary policy works more via the cross-section
than via the time dimension, which is a traditional New Keynesian intertemporal story. To put it bluntly, could it be that an interest-rate cut has a bigger impact on aggregate demand because it shifts income from creditors to debtors who stand ready to spend, rather
than via intertemporal substitution? Micro heterogeneity and distributional aspects already appear on the monetary policy stage, and they are backed by advances in theoretical research. Brunnermeier and Sannikov, for instance, argue that targeted monetary policy leads to redistributive effects that help mitigate financial frictions. And I think, indeed, credit-easing policies are an explicit example of that, since specific types of lending are being supported. Newly developed heterogeneous agent New Keynesian models, the so-called HANK models, also help to gain essential insights into monetary policy transmission channels when the assumption of representative agents is abandoned. Such models suggest that forward guidance could be less powerful than conventional rate cuts because of liquidity-constrained households, for instance.
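As a rough numerical illustration of that last point (a toy, partial-equilibrium sketch, not any specific HANK model; the parameter values and the income feedback are assumptions for illustration only):

```python
# Toy comparison: a 100bp rate cut today vs. the same cut promised four quarters ahead.
# Unconstrained ("Ricardian") households follow a log-linear Euler equation, so their
# consumption today reacts to the whole expected path of rates; hand-to-mouth
# households only react to current income.

SIGMA = 1.0   # intertemporal elasticity of substitution (assumed)
LAMBDA = 0.3  # share of hand-to-mouth households (assumed)

def ricardian_response(rate_path):
    # c_0 = -sigma * sum of current and expected future rate deviations
    return -SIGMA * sum(rate_path)

def aggregate_response(rate_path, income_today):
    # Hand-to-mouth households spend current income one-for-one (MPC = 1, assumed).
    return (1 - LAMBDA) * ricardian_response(rate_path) + LAMBDA * income_today

cut_now = [-0.01, 0.0, 0.0, 0.0, 0.0]    # cut today
cut_later = [0.0, 0.0, 0.0, 0.0, -0.01]  # forward guidance: cut in quarter 4

# A cut today also raises current income (assumed 1% feedback), which hand-to-mouth
# households spend; a promised future cut does not raise their income today.
print(aggregate_response(cut_now, income_today=0.01))   # 0.010: larger impact response
print(aggregate_response(cut_later, income_today=0.0))  # 0.007: forward guidance dampened
```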
Now, to come to the topic of this conference, this strand of research would benefit from additional data to help rigorously test these theories, also at the euro area level. For sure, extra data at a fairly granular level, with a panel dimension to capture effects over time as well, are of interest here. Micro data from the Household Finance and Consumption Survey are already a step forward, and that effort should be continued. For example, these data have allowed researchers at the ECB to mitigate concerns that the APP benefits the wealthy at the expense of the poor. Other Eurosystem data initiatives, like the one the President mentioned, AnaCredit, are also very useful, for instance to study the extent of zombie lending and
how it interacts with the monetary policy stance. The second challenge is digitalization. And as you all know, digitalization of society dramatically changes our lives, how we produce, work, trade, consume. So what are the consequences for monetary policy?
I shall mention two interlinked dimensions here. First, digital products and services raise issues with measuring the general level of macro aggregates that central banks typically look at. How to adequately capture quantities when, for instance, Netflix or Spotify memberships
allow unlimited consumption of content? How do we determine potential output in such economies? And what about measuring consumer prices for digital service providers such
as social network platforms? Second, technology challenges our understanding of price dynamics. Is price stickiness still relevant for digital transactions? How do prices behave when the marginal cost of producing more is very small, even close
to zero? Addressing all these questions is no easy task. Overall, digitalization complicates our understanding of the transmission process from economic activity to inflation. And this has implications not only for the way we model the economy, and here I am thinking
about possible adjustments to the new Keynesian Phillips curve, but also for the role we devote to monetary policy. Should monetary policy set different objectives if prices are highly flexible and the costs of inefficient price dispersion are much smaller than previously assumed? Too early to tell, of course, but definitely worth an in-depth investigation.
Meanwhile, and turning back to the data issue, I welcome the advances made in measuring macroeconomic aggregates in the digital economy, in particular consumer prices. Across the Atlantic, the Billion Prices Project and Adobe Analytics data are promising examples of that. They provide tentative evidence that U.S. inflation could be overestimated, although these results seem to depend on the data set used. At euro area level, National Statistical Offices' initiatives on integrating online and scanner prices into HICP measures, as well as the Eurosystem's choice of investing heavily in research on price setting using microdata, will certainly help, too.
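To give a flavour of how such online and scanner price data typically enter an index (a minimal sketch: the Jevons formula shown is the standard elementary aggregate used for HICP strata, but the data structure is hypothetical):

```python
import math

def jevons_index(prices_base: dict, prices_current: dict) -> float:
    """Unweighted geometric mean of price relatives over products observed in both
    periods, the elementary aggregate formula commonly used for HICP strata."""
    matched = prices_base.keys() & prices_current.keys()
    if not matched:
        raise ValueError("no matched products between the two periods")
    log_relatives = (math.log(prices_current[p] / prices_base[p]) for p in matched)
    return math.exp(sum(log_relatives) / len(matched))

# Hypothetical scraped prices, keyed by product identifier
jan = {"sku-1": 2.00, "sku-2": 5.00, "sku-3": 9.99}
feb = {"sku-1": 2.10, "sku-2": 4.90, "sku-4": 7.50}  # sku-3 disappeared, sku-4 is new
print(jevons_index(jan, feb))  # index over matched products only (sku-1, sku-2)
```

Note how unmatched products simply drop out of the comparison, which is exactly where issues such as new product bias arise.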
And while digitalization challenges our thinking about macroeconomic accounting, it can also provide a whole new set of granular and at the same time multidimensional data. In that sense, big data can become our ally, and I will briefly come back to that point at the end. Before that, a word on the third challenge, which is the changing world of the financial
sector, relating to the changing nature, I would say, of financial intermediation, which is well documented in a research area that has exploded, I think, during the last decade. We have not only witnessed greater fragmentation within the banking sector, which has forced
us to take unprecedented non-conventional measures to preserve a smooth transmission of monetary policy. We are also observing a slow moving tendency towards a larger role for non-banks in the financing of the economy. With the Capital Markets Union, a project
we fully endorse, the role of players outside the traditional banking sector will hopefully get bigger. And this justifies particular vigilance on the part of the ECB, to be ready to monitor developments in this area. We should also make sure we are able to monitor developments in so-called private virtual tokens that aim to play a role as money, even though I tend to think that these developments are not, or not yet, of macroeconomic relevance. And related to this, the fintech revolution blurs the traditional boundaries between the financial and the non-financial sector. When such things become more relevant,
monetary policy transmission can profoundly change, and monitoring the traditional financial indicators can turn out to be inadequate. Therefore, good data coverage of new trends in the financial sector is essential. And fortunately, again, the Euro system plays
a proactive role here, and I would like to give two examples where new data play a key role. During the financial crisis, a Eurosystem-wide effort was launched to exploit the bank-level data underlying the money and credit aggregates that are monitored in the ECB's monetary analysis. And that way, as the President said, the Governing Council could assess in a fairly granular way the transmission of measures via the banking sector. The data also proved key for calibrating the details of the targeted loans we started giving to banks back in 2014. And thanks to money market statistical reporting, which I recognize is a huge statistical challenge, we also have a better view on the workings of euro area money markets. Moreover, it enables the Eurosystem to provide for a backup risk-free benchmark rate should currently available private benchmark rates cease to be published. In this respect, it is very good to see how new economic realities are being reflected
here. Contrary to the current benchmarks, transactions with non-bank money market participants could be included in this new benchmark too.
So to conclude, the three challenges I raised today may not only imply extensive use of existing microdata, but also require further efforts to exploit the new world of data opened up by digitalization, the so-called big data.
I do not intend to elaborate much on the concrete applications and challenges that come with big data; these aspects will certainly be tackled more deeply in a later session of this conference. That said, I think technology-driven data bring serious challenges from a practical point of view, above all because the granularity is multi-dimensional. As Andy Haldane of the Bank of England correctly put it in a speech he gave earlier this year, it runs through their volume, the cross-section dimension; their velocity, the frequency; and their variety. And one needs efficient data analytics tools to use the data properly, while being aware of their limitations in terms of privacy and confidentiality. To wrap up, ladies and gentlemen, dear friends, the challenge for the future will be, I think, how to translate the changes from new data into concrete policy implications. After
all, the micro evidence has to add up to policy advice for monetary policy, which is a macro policy with a rather limited set of instruments. And therefore, I think that in some cases, other policies, such as micro prudential, fiscal, or structural
policies could be more appropriate for tackling the challenges that new data reveal. I stop there. Thank you very much for your attention. Thank you. Thank you, Jan. That's a very good start for this conference. And as
you say, I mean, we are always confronted with the speed of technological change and the capacity to adapt, because that requires big investments and big choices in budgets, for example. Think about AnaCredit, which you mentioned: it was a big investment, and the return will come over time. These are always difficult choices, given the speed of technological change. I think it was a very good start. Pablo, many things will come back, one in particular: the measurement of prices more particularly. I think Ewald will also deal with the measurement of prices in general, of inflation, actually.
Thank you, Pablo. You go to the podium. Well, thank you. Thank you very much. I promise that I didn't see Jan's presentation before titling mine. But as Peter said, many topics will be recurring here. Over the last years, obviously, we have been seeing some challenges from digitalization. There are profound disruptions that we are witnessing and that will continue to be present: automation, artificial intelligence, and new business models such as the sharing economy. They are starting to shift the way we interact, trade, consume and live in general. It's interesting to note that this is happening not only at the core of the advanced economies; it is quite prevalent across a number of economies. Most economies are shifting the boundaries
of what is shown here as their level of digital evolution. This, on the one hand, makes the proper measurement of economic and financial phenomena more challenging. The traditional framework, as has been pointed out, emphasizes the transformation of basic inputs into final products by a representative agent, for instance a firm that delivers a homogeneous good to a household. However, in the present day, digitalization and financial innovation increasingly put into question the usefulness of this paradigm. The digitalization of economic relationships, leading to, as I said, the sharing economy, makes a clear-cut distinction between producers and consumers harder to pinpoint. The value added of the sharing economy is generated by the match, and therefore we need to distinguish the contributions of both what was traditionally a consumer and the producer in the sharing economy.
Also, the diverse ways in which individuals and firms can organize themselves into economic activities render the traditional view of the representative agent somewhat obsolete in terms of describing aggregate behavior. Another example is the bundling of experiences and demands for goods and services which in the past were provided by well-defined products; today this makes the identification and measurement of the prices of these goods and services, and hence of inflation, much more difficult. It is amazing how smartphones, for instance, have today bundled a number of products that merely 10 or 15 years ago were provided by very well-defined, different goods. On the other hand, the transmission mechanism of monetary policy is also likely to be shifting. One key channel of transmission is through asset prices
and credit flows, particularly the exchange rate and cross-border capital flows. It is noteworthy how the globalization process, which began in the early 90s, has resulted in a rapid and unprecedented integration of a disparate set of economies into global
finance. This is likely to continue, because the rise of mechanisms to facilitate cross-border lending and to circumvent regulations, such as crypto assets, is likely to make these types of transmission mechanisms stronger in the future and harder to control, even by the most determined jurisdictions. The dispersion of economic activities across jurisdictions also highlights the importance of global value chains in global production, and the decentralized use of knowledge and intellectual property obtained in a centralized way through R&D. For instance, according to international evidence, with globalization and the development of these global value chains, exports are increasingly composed of imported inputs, a phenomenon that is closely linked to digitalization and will clearly be pushed forward through the use of blockchain in letters of credit, which is an incipient phenomenon. From the point of view of central banks, this obviously means that the calibration of the transmission mechanism of monetary policy, for instance from the exchange rate through net exports to economic activity, will need to be revisited to take these features into account. Therefore, digitalization poses significant challenges going forward for the measurement
of economic activity as well as for the transmission of monetary policy. Some of these challenges are very present in the near term. Others will be more pressing in the distant future. In any case, monetary policy authorities, particularly those that follow inflation targeting, will need to be cognizant of these developments to understand and calibrate
their policies to achieve their goals. In terms of the need for granularity, or heterogeneity, there are some challenges that need to be tackled in the most immediate term. I believe that tackling these will help in the future with the oncoming challenges from digitalization.
These relate to the need to increasingly incorporate the granularity of economic behavior into the assessment and design of monetary policy. There are a number of examples, and I want to highlight a few of them. In terms of prices, obviously an adequate response of monetary policy to shocks depends on a proper understanding of price dynamics. Consumer price microdata reveal, everywhere and in particular in Chile, a highly heterogeneous price-setting process in the economy. Here I show the case of my country, Chile. Inflation is well anchored at the target: our target is 3 percent, and average inflation for the past 20 years has been 3.2 percent, quite close to the target. We aim to keep it between 2 and 4 percent most of the time, and interestingly, it has been between 2 and 4 percent probably 50 percent of the time. But it is very volatile. How to understand this volatility, and how to see whether or not it affects the achievement of our target, is quite the challenge for us going forward. In terms of labor market microdata, it is worth mentioning that measuring labor's contribution to output has become more complex
due to new developments such as increased labor participation by women, significantly more years of schooling, immigration, self-employment in services, and technological advances that are pushing out low-skilled workers, among other phenomena. In this setting, traditional measures of slack in the labor market, such as the unemployment rate or employment levels, have lost significance. This has been a feature of the economic landscape in a number of economies after the Great Recession. The understanding we need of the changing role of the labor market in driving wage and inflationary pressures should come from the availability of more timely and granular information, encompassing not only demographic and employment characteristics, but also the link to labor market outcomes in terms of wages. I have to say that this is obviously important not only for monetary policy, but goes well beyond that, into the realm of
public policy in general. In terms of financial data, as has also been highlighted, borrowing structures stand out as another relevant element for monetary policy, underscoring the need for close and timely monitoring of credit sources. Reliance on average behavior proved woefully inadequate in the run-up to the Great Financial Crisis. We will need to be forever cognizant that credit events in very narrow slices of specific markets, such as subprime loans, eventually had unexpected systemic implications for the global financial system. So we should always be aware of the underlying reality in financial markets. There have been major advances in the availability of more and better information from the financial sector, but obviously this needs to be pushed even further. Identifying the recipients of financing is not enough for today's monetary policy requirements. It is necessary to characterize their behavior, solvency,
and vulnerability, for instance to predict, in the face of an economic slowdown, the effect on business spreads, employment, and salaries. A balance sheet approach highlights the fact that macroeconomic policies adopted in response to shocks may be constrained by domestic balance sheet mismatches. For example, tight monetary policy aimed at preventing an excessive real depreciation may protect balance sheets with large currency mismatches, but create further pressures on balance sheets that have significant maturity mismatches. Taking this into consideration is important, especially in times of financial stress.
A further example of the need for more granular data is on cross-border spillovers of monetary policy. They have been at the center of international policy debates, particularly since the onset of the global crisis. Understanding the channels through which one country's monetary policy affects the international economy is an ongoing research agenda, including spillover via internationally active banks, and the BIS obviously provides
a very good source of information in this area. However, the rise of alternative mechanisms for financial intermediation is likely to deepen over time. It can be noted that those jurisdictions where crypto assets have become more popular, and where authorities have shown a more restrictive approach to them, are also those where overall controls on financial integration are more acute. This points towards a future where the ability of different jurisdictions to impose controls on cross-border capital flows will be diminished compared to the past, and this will also likely strengthen the transmission of monetary policy through cross-border capital flows. Let me finish with some remarks on the potential of merging administrative data to improve our understanding of monetary policy. The availability of data for monetary policy
conduct could be improved, but simply increasing the volume may not be enough to significantly improve policy effectiveness. Having more information at hand can help in making timely and appropriate decisions, but only insofar as those responsible for monetary policy are able to correctly analyze and interpret these data.
New data sources open new opportunities, and one particular area where this is true and where countries lag behind their potential is in the availability of high-quality merged administrative data. Due to confidentiality issues and interdepartmental bureaucracy within different branches of the government, large data sets, often censored, capturing different
aspects of firm and household behavior, remain isolated and of limited use. For instance, and going back to the question of price dynamics at the micro level, tax records may provide very useful information about the behavior of margins at the individual firm level. But this analysis could be greatly advanced by merging tax data with customs information recording cross-border commercial transactions, for instance to understand the pass-through of exchange rates to inflation. For the labor market, it's the same thing. Merged data can also be used as a powerful tool to inform our policy decisions to the
extent that they are available at high frequency. In the case of Chile, a good example comes from a recent law that requires all firms in the economy to conduct all their inter-firm invoicing of value-added tax electronically. This allows the tax office to receive, every single day, billions of purchase and sale invoices between all firms operating in the economy. By itself, this data can be treated to provide a good real-time proxy of economic activity, since value added can be extracted directly from every single transaction, and also of inflation, because invoicing records provide both quantities and prices separately.
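A stylized sketch of how such invoice records could be turned into daily activity and price proxies (field names and records are hypothetical; the real pipeline at the tax office is obviously far larger):

```python
from collections import defaultdict

def daily_proxies(invoices):
    """Aggregate electronic VAT invoices into a daily activity proxy (total transacted
    value) and average unit prices per product, keeping quantities and prices separate."""
    value_by_day = defaultdict(float)           # activity proxy: value transacted per day
    price_sums = defaultdict(lambda: [0.0, 0])  # (sum of unit prices, count) per (day, product)
    for inv in invoices:
        value_by_day[inv["date"]] += inv["quantity"] * inv["unit_price"]
        s = price_sums[(inv["date"], inv["product"])]
        s[0] += inv["unit_price"]
        s[1] += 1
    avg_prices = {key: s[0] / s[1] for key, s in price_sums.items()}
    return dict(value_by_day), avg_prices

# Hypothetical records: inter-firm invoices with separate quantity and price fields
invoices = [
    {"date": "2018-07-10", "product": "cement", "quantity": 100, "unit_price": 5.0},
    {"date": "2018-07-10", "product": "cement", "quantity": 40, "unit_price": 5.2},
    {"date": "2018-07-11", "product": "cement", "quantity": 80, "unit_price": 5.1},
]
activity, prices = daily_proxies(invoices)
print(activity)  # {'2018-07-10': 708.0, '2018-07-11': 408.0}
print(prices)    # {('2018-07-10', 'cement'): 5.1, ('2018-07-11', 'cement'): 5.1}
```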
However, merging these data with additional sources such as bank records could be used to provide early warnings about systemic events in some sectors of the economy, since it would include not only the complete network of transactions between firms, but also the exposure of banks and financial institutions to individual firms and/or particular industry clusters. Let me close with some concluding remarks.
The use of the representative agent framework for implementing monetary policy and for measuring the macro economy has obviously served the central banking profession well for decades. However, the increased digitalization of economic activities implies that not only will the measurement of economic relationships such as output, inflation and demand become
significantly more challenging, but the transmission mechanism of monetary policy itself will also likely shift. Central banks need to be cognizant of the difficulties these trends pose for the achievement of their objectives, and they involve short, medium, and long-term challenges for statistics, research, and also model development.
In the more immediate future, a fruitful approach stems from the merging of administrative data, and as a central bank that also compiles the national accounts, we are heavily invested in that. On the one hand, this will provide enhanced granularity in assessing economic behavior by heterogeneous agents. On the other hand, the need to understand and process these data is in itself a very good stepping stone towards the further challenges that will come down the road from big data and digitalization in general. Statistical agencies and central banks, I believe, are adequately placed to preserve the integrity and anonymity of reporting entities,
whether they be households, financial or non-financial corporates. This care for privacy is, however, a topic that in itself deserves deeper study. Thank you very much. Thank you. Thank you very much, Pablo, also for bringing in the international dimension, but also for the rest of it. Thank you. Ewald? Ladies and gentlemen, dear Peter, the title of this session is New Data Needs for Monetary Policy. And of course, this title could
lead to the temptation to produce a long wish list now. And economists, of course, are and should be data-oriented, so it would not be difficult to compile such a long wish list. But economists are also trained to think in cost-benefit dimensions, to balance merits and costs. And providing new data may involve quite substantial costs, costs for all concerned; in fact, we are exposed to some criticism in that respect. So I do not intend to talk about, let's say, wish lists, but rather about new perspectives, meaning not necessarily add-ons, but new products that may replace old ones. And to start with a very basic problem for central bankers, I want to make some short remarks on the problems of measuring inflation in a globalized world. So we do
have, of course, the classical problems of inflation measurement. These are the four biases: product substitution bias, quality change bias, new product bias, and outlet substitution bias. And new statistical methods have been introduced to reduce these biases: annual updates of consumption baskets, which in fact is what we have done in Austria since 2010; quality adjustment of prices, the old field of hedonic methods, which as we know is quite often not so easy; and frequent adjustments of the surveyed outlet structures.
But there have been two big challenges in the last 10 to 15 years: the effect of the internet on prices and inflation, and the inclusion of the costs of owner-occupied housing in inflation measurement. And I want to make some short remarks on both of these. With regard to the effect of the internet, or of digitalization in general, which Jan already dealt with to some extent, I think it's quite interesting to see just
these pure statistical facts about the substantial increase in the use of e-commerce in the euro area. The red dots are for 2003, and the blue lines are for 2014. You see that practically everywhere there has been a huge increase. But what is also interesting is that we have quite substantial regional differences: very high percentages in Ireland, Luxembourg and Finland, and the lowest in Greece, Italy and Latvia. You see the same if you translate it into the percentage of individuals ordering goods and services online. This has, again, risen substantially, and again it differs among countries.
So the question is: what are the effects of these very substantial changes in economic structures? Would this lead to a tendency towards lower prices due to e-commerce? It could, because there are cost savings for wholesalers and retailers, which might be passed on to consumers; it might also mean increased profits for some of the agents; or we could have an effect of increased competition and transparency. The question is whether this is only temporary or really a massive structural change. There is quite a body of evidence, a number of studies, on the effects of e-commerce on
consumer prices and inflation, but one has to say it is not really conclusive; the findings differ quite a bit. If I restrict myself to the latest one, a very encompassing study by Cavallo, there is not really a clear indication that we have differences between online and offline prices. So this is still an open field. But it may be one of the reasons behind this general topic of persistently
low inflation rates. The other topic that is relevant and much discussed is the integration of the costs of owner-occupied housing into the HICP.
And the basic question is, of course: do we see housing as a consumption good, and then include it in the HICP as consumption expenses, or do we see it as an investment good, in which case it would not be included? And as you see, according to the legal frameworks that we have, the HICP contains, of course, no asset price elements; owner-occupied house price indices are compiled separately. There we again have some questions.
What does it really mean? How do we include land prices, which are clearly asset prices and which we may have more difficulty assessing? And we see that we have different approaches in different countries. Owner-occupied housing is included in the US approach to inflation measurement. Currently, we do not include it in the HICP, but by the end of 2018 the European Commission will assess the suitability of integrating owner-occupied housing into the HICP. And then the question, of course, is: will this make a substantial
difference? We have tried to look at this as far as we can with the data we have available. And to make a long story short, the effects, if you look at the euro area, are not very dramatic. So yes, it is something that might make a difference, also if you want to compare with the US. It is not systematic either; it does not vary, let's say, with the business cycle. But it is a difference: it leads to slightly higher inflation rates, as you see, but not by a very large amount.
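The mechanics behind that modest effect can be sketched with a simple weighted combination (a stylized calculation; the weight used here is an assumption for illustration, not the actual HICP figure):

$$\pi^{\text{total}}_t = (1-w)\,\pi^{\text{HICP}}_t + w\,\pi^{\text{OOH}}_t$$

With an expenditure weight of, say, $w = 0.10$, owner-occupied housing costs running one percentage point above the rest of the basket would raise measured inflation by only about $0.10 \times 1 = 0.1$ percentage points.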
So much for these two examples, where there is, of course, a clear economic policy discussion. If I may come back to the beginning and to a more general point: are there new data needs for monetary policy? If we look at it exclusively from the monetary policy perspective, Jan Smets showed, and also Pablo Garcia, a number of new questions and new fields: non-banks, of course, and technical developments with regard to big data and so on. But if I look at, let's say, the banking side as we have it,
it may not be the consensus view in this room, but I dare to say that, from my point of view, we do not really see a great amount of new data needs for monetary policy. What we see is that we have a rich data stock, which should be exploited first,
and we should try to focus on using data for multiple purposes. We have numerous requirements from various business fields, different demands with regard to one and the same data stock. So let's merge these demands to gain efficiency and save costs. Data requirements have exploded in recent years, and collecting large data sets for one single purpose will become increasingly hard to justify in the future. So therefore, I think the challenge and the
perspective should be harmonization and cooperation. And what we unfortunately see just now is that a large number of international organizations each follow their own agenda. This, of course, drives up the volume of data requests, and you see here some of these institutions. So I think there is large room for improvement. We therefore need more cooperation and more coordination
to ensure that these international data requirements are harmonized and thus also limited. I think it is not the number of data points that triggers an excessive reporting burden, but uncoordinated methodological approaches together with suboptimal operational reporting channels. And I think the ECB and EIOPA provide a best practice by merging data needs for both monetary policy, under the ECB statistics regulation, and supervisory purposes, under Solvency II, into one single requirement, for instance for insurance corporation statistics. So, to come to the main topic of this conference, concerning granular data: yes, I think we can use granular data in a cost-efficient way and serve multiple users. It is very important that granular data can provide value added for monetary policy and help, but we have to avoid duplication. Therefore, well-structured granular data are an important ingredient for central bank statistics to master upcoming challenges resulting from an
increasingly complex and fast-changing global economy. And we have just heard about some examples of these changes. So granular data may offer more flexibility, enable us to calculate any aggregates autonomously for various purposes. So you have
one kind of general data set for this. And of course, it may also reduce the cost of implementing new statistical requirements in the future. And we have to be aware that reporting burdens, looked at from the side of a bank, are to a large extent fixed costs. Therefore, of course, they are a relatively heavier burden for small banks than for big ones. So if we have this discussion on proportionality, I think it is relevant for the statistical side, too. And, as President Draghi referred to in his introduction, I think the US has perhaps achieved a more efficient distribution across the various sizes of banks. And clearly, I think this cost element is also the reason why some of the proposals meet quite substantial criticism in the financial world in Europe. So what we have achieved, I think, is AnaCredit. AnaCredit, I think, has been a big success, or has a lot of chances.
It promises a high return on investment because it provides a powerful granular data stock. And this allows us to tackle a broad spectrum of current and, what is important, future data requirements. That means it will replace inefficient and expensive ad hoc data collections. It offers the perspective of harmonizing international reporting requirements, for loan data for instance, in the long run, and this is then a relief for the reporters. And of course, it promises a long-lasting and steady return of economic insights in the near future on some of the problems that were just mentioned. And I want to add, because this symposium today is, I think, the last one that
will be chaired by Aurel Schubert as Director General of Statistics, that I really think it is worth mentioning that AnaCredit is one of the big achievements realized with your help and under your chairmanship. So I think this is also a good occasion to mention this. Of course, as I said before, in all cases it's about balancing merits and costs. That means that if new data requirements emerge,
then, of course, first use existing data sets; that means there has to be some degree of flexibility, otherwise costs will explode. Second, check whether an existing data set can be extended. And third, only if it is unavoidable, consider setting up a new data structure. So: balance merits and costs. At the ECB we have this merits-and-costs procedure, which is an example of an intelligent and structured framework that takes account of user needs and balances them against expenditures. But as we all know from our own experience, this is a discussion that has to be held in each specific case. It is not settled once and for all; we have to do this again and again. And I think we owe this to, let's say, our customers, to the public whom we have to serve.
So the key messages are that we will, of course, face major challenges in inflation measurement, due to digitalization and the treatment of owner-occupied housing. We have, of course, the effect of e-commerce and so on; up to now, perhaps a bit counterintuitively, the effects seem not to be very substantial. We should reinforce the multi-use of data and make increasing use of what already exists. What is really important is coordination and cooperation between institutions. And, of course, when we talk about data requirements, always balance merits and costs. So, as the President said, we have had 20 years of successful work on ECB statistics, and I am sure there will be another 20 years of successful work. I wish all the best for that. Thank you.
Thank you, Ewald, also for this other dimension, merits and costs, which was not in the previous presentations, and I'm sure it will come up in the discussion. Natasha, I don't know, what is your take on this? And thank you for having taken this on. Sure; I have a few slides where I... Yeah, yeah, yeah, that's fine. So someone might need to... Thank you for asking me to step in. So I have the excuse of having known that I would be here
for only about 24 hours, so you will be lenient, I hope; that's what I'm going to say. But I had the privilege, relative to Pablo, that I could see all the presentations beforehand. So what I decided to do was to make a quick transversal summary of the takeaways I got from
your speeches and material, and then I will pinpoint two of the examples that some of you mentioned on the usefulness of, first, granularity in data, and second, a deeper knowledge of money markets and the underlying dynamics in what connects the central banking community with financial intermediaries as a whole. Before summarizing what you all said, I wanted to recall that there was a time, not so long ago, for all the economists trained in my generation, so not the youngest one, but not the oldest one either, when, if you
were studying monetary economics, you were basically told that with GDP, CPI, and the interest rate, you could go a very long way towards understanding how the transmission mechanism of monetary policy works. So those were the good old days, or the bad old days, and I think the whole discussion is about what we have gained and what we are gaining relative to those approaches, which were complemented by approximations of monetary policy preferences through loss functions and Taylor rules; and Taylor rules have been the name of the game for central bankers for a long time, even still now to some extent. So now, and this is what you collectively
said, the economy is changing, the structure of the economy is changing, the context in which monetary policy is conducted is changing as well. The policy tools have evolved. The scope of monetary policy to a large extent has evolved, at least in advanced economies. I mean, in
emerging economies, and I think Pablo's remarks were very complementary to what was said by Jan and Ewald, but I think in advanced economies, we moved into a new world with new tools and a combination of tools that yields new results. And then the third point is that technology will not wait, and it will, and it should, help central bankers, as any other policymaker and
any other economic agent, through IT improvements, through artificial intelligence, and through the availability of data. And this is something of which I think the central banking community is aware; it is really up and running on those advances, as opposed to what sometimes
is the public perception of public sector entities. I think central bankers are leading here. So I thought your views along those lines were fairly converging and complementary. Some of you put more emphasis on some dimensions. One key area for monetary policy that comes out
of everything you said is inflation. Ewald spoke more than the others on inflation, but the measurement of inflation is at stake. We need to, and perhaps we can better than before, follow the data generating process (DGP) behind price developments more accurately; but the DGP has very likely changed. That's what Jan and Ewald said. Digitalization happened. The basket has changed. The example of owner-occupied housing is a very good case in point; I'll come back to it later. To me, it exemplifies how the categorization into consumer
goods, durable goods, and capital goods has changed, for the mere fact that Pablo was underlining: the sharing economy is coming up, and the intensity of use of what used to be considered capital is changing the nature of capital. If I have a car and I let it sit when I don't use it, it's a durable good. Now, if I use a car and you're using it 10 minutes after me, in a 24-hour process of using the car, the car is becoming something other than a capital good or a durable good. So this has to be taken into account in price dynamics, and I think it points also to something that Ewald was mentioning in terms of legal constraints and the definition of the CPI, which I'm not familiar with: the fact that we cannot include asset prices in the CPI. Is that something of the past? Do we want to conform our measurements to legal frameworks that might need some amendment? So I leave
that open for the discussion, but I think the question was interesting to ask. Now, as a result of all those changes in the DGP, the value chains are changing, the value added is captured differently, and so you rightly asked the question: what is the impact on final prices? We should expect a decline or a slowdown in price dynamics, but the fact is, when you ask firms, and I've had the chance to look at this from inside firms before, they see their own share of value added being eroded; but this value added is captured by actors that operate in a monopolistic context, and this market structure, as long as it lasts, will probably prevent price dynamics from reflecting the technological changes and productivity improvements that would otherwise lead to more muted price dynamics. You may say it's a good thing for...
central bankers today who are looking for inflation, but this market structure argument is something to be taken into account. Right, so that was the point. Then, and this is my last summary of what you said, you made points about methods and best practices, about the way data collection needs to be conducted. I'm not an expert at all here, but I think the general message is that the best practice principle for data collection is to be parsimonious and to consolidate data requirements as much as possible. And now that we've had a really huge injection of needs for regulatory purposes,
it would be very difficult indeed for actors who have had to swallow those new data reporting requirements to face even more data reporting requirements for monetary policy purposes. So given those constraints, the ideal world for data scientists and economists would be to take the existing stock of data as it is, and here I come back to what Ewald was saying in terms of cost efficiency and not being willing to renew data sets every other day. What is badly needed now is the ability to merge and to match data sets. I think a huge amount of work, and I know a lot of work is being done on this, in terms of matching identifiers and making databases speak to each other. I think there are huge efficiency gains to be made here for the end user of statistics.
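A minimal sketch of what making databases speak to each other means in practice (the data frames and the use of the LEI as the matching identifier are hypothetical illustrations):

```python
import pandas as pd

# Two isolated data sets on the same firms, hypothetically keyed by Legal Entity Identifier
loans = pd.DataFrame({"lei": ["LEI-A", "LEI-B", "LEI-C"],
                      "outstanding_credit": [120.0, 250.0, 60.0]})
trade = pd.DataFrame({"lei": ["LEI-A", "LEI-C", "LEI-D"],
                      "exports": [40.0, 15.0, 90.0]})

# An inner join keeps only entities present in both sources; a shared, well-maintained
# identifier is what makes this cheap compared with fuzzy name matching.
merged = loans.merge(trade, on="lei", how="inner", validate="one_to_one")
print(merged)  # LEI-A and LEI-C, with both credit and export information
```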
Now, granular data has been a very common topic for you all. It has lots of advantages. You've highlighted that in terms of quality and flexibility; fungibility comes back to the matching of data sets. But it also matters for addressing a new set of economic issues, which include heterogeneity and distribution; and insofar as monetary policy
has an impact on distribution and relies on heterogeneous transmission channels, this brings in new information that we were not able to address with the triptych of CPI, GDP, and interest rates. So I'm giving away the answer a bit, but this is also something for discussion.
Now, between granularity and aggregate macro data, there is a bit of a balance to strike. We are so happy to have those granular data that we want to make the most of them. But we shouldn't forget that the total is sometimes different from the sum of the parts.
And so we probably need to keep both approaches, to have a comprehensive and holistic view of the transmission of monetary policy, but also of economic structures. Now, I do have a wish list, because yesterday I surveyed the staff doing monetary policy here, and I had, you know, emails coming back with very granular lists: we need this monthly, we need this in terms of stocks and flows. I won't read it all out, and I haven't put it all on the slides, but this wish list is out there, and it's a kind of shopping list that I think is worth going through.
I summarized it in four points. First, having a full cross-country matrix, and I completely agree with that: any kind of network analysis or general equilibrium analysis will only be possible when we have bilateral flow data of some sort. For my own research I would use it for international capital flows, but that is true very generally, as soon as you are dealing with a network, and the financial system is a network. Second, the question of gross flows and redemptions, so having clean net measures. Third, more granular data on the OFI sector, the non-bank sector. And fourth, multinationals need to be singled out more in national statistics. This is also a fairly broad base. Beyond that, there was lots of praise for AnaCredit; really, if there is one thing to be underlined in terms of the usefulness of the investments that have been made, it is that one. Until now it was the preserve of some economists
who were able to approximate it themselves, but now that it will be available to everyone, it will really generate a lot of research, and hopefully good research. The last point on this slide: exposure to crypto assets. Someone mentioned it, and I think maybe we should keep an eye on it, not because it's very small, but because the link between monetary policy and financial stability that lies with crypto assets is very deep and deeply rooted, close to the meaning of money. I don't want to expand on that, but this is something fairly key. I just want to add something
that was brought to mind by Pablo. I think in the euro area, as opposed to other countries, and probably also as opposed to the US, we tend to discard, for monetary policy purposes, the international spillovers of our policies. I'm not saying we're not looking at them; a lot of people look at them in this house. But for monetary policy purposes, using the international role of the euro as a reserve currency, as a trading currency, all those dimensions have an impact on how our monetary policy transmits, for example into foreign exchange markets. Sometimes we don't understand whether the euro goes up because of current account imbalances or because of uncovered interest rate parity; sometimes it switches, and behind this there is a logic that we might fail to capture, because we don't pay enough attention to this international dimension, which emerging economies have had to pay attention to in order to survive as currency issuers. So I think it's worth highlighting this. I close with two charts, one of which is an example of why granularity matters
and how we can use granular data without any modeling effort, and of the kind of messages we can get. Here you have a picture of the evolution of credit and investment by non-financial corporations. This was done on Italy, because it was a pre-AnaCredit world, but using the granular data the authors had for Italy, they looked at the evolution of credit supply to NFCs and of the investment made by those non-financial corporations, in growth rates relative to 2006. They split the sample into two subgroups, and the split was made according to the way the firms' banks were funding themselves. So I'm looking here at the transmission mechanism of monetary policy, without causality; it's just a visualization of data. The red firms are the ones getting liquidity, or getting credit, from banks that rely a lot on interbank funding, and the blue ones are the ones that relied on banks that do not rely so much on interbank funding. So it tells you a bit about the link between money market structure and the transmission from money markets to firms' investment dynamics.
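In spirit, the split behind those charts is just a grouping of firm-level data by a characteristic of the lending bank (a hypothetical sketch with made-up numbers, not the authors' actual data):

```python
import pandas as pd

# Hypothetical firm-level data: each firm tagged with its main bank's interbank-funding share
df = pd.DataFrame({
    "firm": ["f1", "f2", "f3", "f4"],
    "bank_interbank_share": [0.45, 0.10, 0.50, 0.15],
    "credit_growth": [-0.12, -0.02, -0.15, -0.01],      # growth relative to 2006
    "investment_growth": [-0.20, -0.03, -0.25, -0.02],
})

# Split the sample by whether the firm's bank relies heavily on interbank funding
df["exposed"] = df["bank_interbank_share"] > df["bank_interbank_share"].median()
print(df.groupby("exposed")[["credit_growth", "investment_growth"]].mean())
```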
And the right-hand chart basically tells you that those which were indirectly exposed to a lot of interbank funding in those years were the ones where investment collapsed the most, even more so than the credit constraint shown on the left-hand side would suggest. I skipped my last slide, Peter, because I feel I have used enough time. Thank you, Natasha, very pertinent points. We have about 25 minutes. Just quick comments, also to the discussant, if I may, Natasha. The good old days: of course, these were the days when imbalances were building up and we were looking at imbalances
in the labor market, but the imbalances were in the financial market. So, and the data, of course, we didn't have all the data that we would have wished, but it was more the attention. It was not only a problem of lacking data about interconnectedness, but also just sort of focus how you see the world. And so, but that's just a caveat
when you say we lack of data, but it's also the attention to the right problems, not always easy to do. The second, it didn't really come in a discussion directly. It's indirectly in. It is the, when what we use, output gap, potential growth, productivity. I mean, if you ask me where, I mean, I was asked by I think Bloomberg
or one of these, what will be the biggest mistakes that we are making today and that we will discover 10 years from now, I say probably the growth of productivity because we have little clue. Now, of course, we have a GDP nominal and then you have both, the real and the nominals and the price side. And probably there, we don't know yet, but we know there are big measurement problems there.
That may feed into our policy; of course, we do not know yet today, but it is a key challenge. And the last little point, if I may, one which was not mentioned and where we have an ambition, which costs money, Ewald: it is what the New York Fed is actually doing, looking at the general public's perceptions of inflation and of monetary policy. We have some information on markets, via market prices, and also surveys among market participants. But from the general public's point of view, we have no clue. We have some surveys asking whether you think prices will rise faster or not, the Michigan survey in the US, these sorts of things. But I think we should also look more at the public's perceptions of monetary policy. And the very last point is on wages, which did not come up very much, only incidentally; you mentioned it too, Pablo. We have big data on that, but we need much more, given what you said. Pablo, maybe I ask for a very quick reaction, if you have one, to the interventions. If there are none, there are none, and then we open the floor. If it is very short, it is fine, so that I can give the audience a chance to react afterwards. So Pablo, Jan, Ewald. Very quickly: I found all the presentations very stimulating, and Natasha's comments also very much to the point. One thing I would like to highlight is that the distinction, in this dimension, between advanced and emerging economies is shrinking, and it will continue to shrink really fast.
One example: in an emerging economy with a weaker statistical base, there are very strong local demands for data. Societies are heterogeneous, and there are complaints from marginalized groups or faraway places that would like better statistical representation. Of course, meeting that demand can become extremely expensive. One example is Colombia, with its aim of targeting social programs after the peace process. How to do that? Well, they found that a very good proxy for local GDP was the density of cell phone communication networks, which you can observe every 15 minutes. Another example is Chile, where self-employment is a very good buffer in times of weak economic activity, and we have seen that the number of Uber and Cabify drivers today is of about the same magnitude as the increase in self-employment. So these are a few ways in which digitalization is changing the nature of the need for macroeconomic statistics.
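To make the Colombia example concrete, a minimal sketch, with entirely made-up numbers, of calibrating such a proxy against official GDP and then nowcasting from the high-frequency signal:

```python
# A minimal sketch with made-up numbers: calibrate a log-log relationship
# between cell-network density and official local GDP, then nowcast.
import numpy as np

# Hypothetical training pairs for regions/years where both are observed.
density = np.array([0.8, 1.2, 2.1, 3.5, 4.0])   # proxy, arbitrary units
gdp = np.array([10.0, 14.0, 22.0, 35.0, 41.0])  # official local GDP

# Fit log(gdp) = a + b * log(density) by ordinary least squares.
b, a = np.polyfit(np.log(density), np.log(gdp), deg=1)

def nowcast_gdp(latest_density: float) -> float:
    """Map a fresh (e.g. 15-minute) density reading to a GDP estimate."""
    return float(np.exp(a + b * np.log(latest_density)))

print(round(nowcast_gdp(2.8), 1))  # nowcast for an unobserved region-period
```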
Finally, it is clearly expensive to produce new statistics, and merging already existing administrative data is, at least as we see it, a very good avenue, in particular to ease the reporting burden, which is an issue. Yes, if you allow me, two comments: one on cost efficiency and one on the impact of e-commerce.
On cost efficiency, I absolutely agree: it is of great importance. No duplication; coordination; merging of statistics. But in the cost-benefit analysis, the benefit we have to keep at all costs, I would say at all prices, is people's trust in our policies and in what we are doing. Two things are key there: our objective, which is an inflation figure, and our transmission process. We have to be sure that we can maintain an excellent measurement of our inflation objective, which, with e-commerce going up, probably requires adjusting our methods. And on the transmission process: if it is true that the direct intertemporal substitution mechanism is losing weight in favor of responses that move more through general equilibrium mechanisms; if we know that the measures we have taken may have very divergent impacts, because a fraction of households have zero or little wealth and are very weakly responsive to interest rate movements; if we know that wealthy people, when we lower interest rates, rebalance their portfolios towards other assets instead of consuming more; if we know that there are both fixed and adjustable mortgage rates across the euro area, then I think we should try to see that reality more. And this really requires more granular data in order to keep confidence in what we are doing.
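One stylized way to write this point, as an illustration rather than the speaker's formulation: the aggregate consumption response to a rate change is a weighted sum of very different group-level responses,

\[
\frac{\partial C}{\partial r} \;=\; \sum_{g} \omega_g \,\frac{\partial c_g}{\partial r},
\qquad g \in \{\text{low-wealth},\ \text{wealthy},\ \text{fixed-rate},\ \text{adjustable-rate}\},
\]

where the weights \(\omega_g\) and the sensitivities \(\partial c_g / \partial r\) differ in size and even in sign across groups; granular household data are what would let us estimate both.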
Second remark, on e-commerce; in fact, three quick remarks. One: it is true, it will have a downward impact on prices through more transparency, and the impact of technology on other factors of production will also reduce prices and costs, I think of robots competing with workers and thereby putting downward pressure on wages. Secondly: is this temporary or permanent? I think a large fraction may be temporary, because there is no reason to think that the diffusion of e-commerce will not stabilize at a certain point in time. So it will be something more in terms of relative price levels: if, say, e-commerce diffusion lowers the price level by 2% spread over five years, it shaves roughly 0.4 percentage points off measured inflation each year during the transition and nothing thereafter. And by the way, monetary policy can always make sure that the impact of these relative price adjustments is mitigated; we always have the instruments and the policies needed to make sure that the increased potential created by the digital economy is fully exploited. And the third and last remark is that
digital technology may also have an impact on the natural rate of interest. The effect may be upward, because it raises potential growth, but it may also be downward, because the demand for capital investment goes down. This is an argument Larry Summers developed some time ago: firms like WhatsApp have a market value way above that of many other firms, with very, very low capital investment. And that is in itself very challenging for monetary policy, given the zero lower bound.
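A textbook benchmark helps fix ideas, as an illustration rather than the speaker's derivation: in a standard Euler-equation economy the natural rate satisfies

\[
r^{*} \;=\; \rho + \sigma\, g,
\]

with \(\rho\) the rate of time preference, \(\sigma\) the inverse of the intertemporal elasticity of substitution, and \(g\) potential growth. Technology that raises \(g\) pushes \(r^{*}\) up; the offsetting channel, weaker demand for capital investment, works outside this simple formula by depressing the equilibrium return on capital.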
So, if that is true, we should then also reflect, I think, on our inflation objective. Thank you. Very briefly: I think this holds for quite a number of issues, and it is the challenge of this conference. It is not just about getting numbers; it is about getting numbers for the relevant questions. Therefore, the cooperation between economic theory and statistical approaches is extremely important. One field where this is quite obvious, and has been visible, is labor markets, because many of us have been astonished; there were always the expectations, as you know, the whole Phillips curve discussion and so on. But in labor economics things have been much more advanced; they knew that you have quite different relationships.
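For reference, the relationship at issue in its standard New Keynesian form, added for context and not taken from the speaker:

\[
\pi_t \;=\; \beta\,\mathbb{E}_t\!\left[\pi_{t+1}\right] + \kappa\, x_t + u_t,
\]

where \(\pi_t\) is inflation, \(x_t\) a measure of slack such as the output or employment gap, and \(\kappa\) the slope that much of the recent discussion suggests may be flatter, or less stable, than once assumed.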
And I have to say, if I remember the discussions, and you were there too, at the BIS: when our American friends spoke, most of their time was about labor markets, but from the point of view of labor economics, which proved to be relevant; and then you know what questions you have to ask of the statistics. So I think it is important to have this. If you allow me one short point: that was not confidential information, what they were saying; it was about the labor market. Yes, I hope it is not too disturbing for the markets. One aspect that I did not want to include in the written part is trust in statistics. This is especially important because, in a time of globalization, we have to deal with international aspects.
I was recently in China and had a talk with the IMF representative there about his view on Chinese statistics. I will only say that we talked about it; I will not comment further. But we have this issue as well; we still have some problems in Europe too. And I want to finish by reminding you of something that, for the statistics community, is still an open sore: the case in Greece, where the chief statistician still faces criminal trials. I think this is perhaps much more important to mention than many kinds of technological or technical changes. This is basic. And it is something of which we should remind ourselves again and again: the independent production of statistics is essential for statistics to be meaningful.
You know, I think you are right, Ewald; that was very important to say. Let me open the floor for about 20 minutes. Yes.