Anastassia Fedyk
Description: Anastassia Fedyk is an Assistant Professor of Finance at UC Berkeley’s Haas School of Business. Her research focuses on behavioral finance, firm valuations, and the impact of big data on financial decision-making. In this episode, Professor Fedyk describes how investors process financial news—often overreacting to old or recombined information. She then discusses how firms investing in popular technologies, such as AI, frequently experience inflated valuations that later correct. Finally, we discuss her involvement with the AI for Good Foundation, where she applies AI-driven solutions to humanitarian efforts, including war documentation and economic resilience in Ukraine.
Website:
Publications:
Resources:
Show Notes:
[0:03] Introduction to Professor Fedyk
[3:41] Behavioral Finance vs. Behavioral Economics
[4:43] Early Research Misconceptions
[6:53] Reactions to Old News
[9:44] News Placement and Market Response
[11:57] Arbitrage Opportunities in News
[12:43] Time Inconsistency in Trading
[15:00] Overvaluation of Popular Technologies
[20:36] The Impact of AI on Firms
[24:39] Concentration of Market Share in AI
[27:55] Surprising Findings on Process Innovation
[30:13] Regulation and Industry Innovation Dynamics
[32:49] AI's Role in Audit Quality
[34:08] Gender Dynamics in AI Workforce
[35:40] AI Applications Beyond Auditing
[37:08] Introduction to AI for Good Foundation
[41:58] Humanitarian Aid in Conflict Zones
[44:36] Measuring Impact of AI Initiatives
[46:13] Documenting War Experiences
[48:29] Effectiveness of Sanctions on Russia
[50:33] Economic Tools Against Russia
[52:50] Security Compromises and NATO
[1:00:20] Future Research Directions
Unedited AI Generated Transcript:
Brent:
[0:00] Welcome, Professor Anastassia Fedyk. Thank you for coming on today.
Anastassia:
[0:04] Thank you for having me. It's a pleasure to be here.
Keller:
[0:05] I'd love to start off by hearing a little bit more about your story. What got you interested in behavioral finance and innovation, and how you ended up at UC Berkeley?
Anastassia:
[0:12] Sure. So I've actually been interested in economics, and specifically behavioral economics, from really early on. As a kid, as a high schooler, I grew up here in the Berkeley area, and I would sometimes come to the seminars in the economics department. And really, the first professor that I met and interacted with was a behavioral economist, Matthew Rabin, and he really got me interested in the field. So I've always dreamed of being an economist. I went to college at Princeton and majored in math, still wanting to end up as an economist. Then I spent a couple of years in industry, two years working at Goldman Sachs in asset management, so I got some practical experience, and then it was back to the PhD for me.
Brent:
[0:49] And then what led you to start with math if you still knew you wanted to be an economist?
Anastassia:
[0:54] So actually, most academic economists, at least historically, did study math at the beginning. It's quite a technical field. It comes in two forms, either theory, which has a lot of math, basically proofs, or empirics, in which case you're working with data. So also something that's quite technical. So either way, it's really helpful to have that math background, either the theory part, things like real analysis, or the empirical part, things like statistics.
Keller:
[1:20] Did you know you wanted to work back towards academia or when you went to industry, did you think you might want to be an economist in industry full time?
Anastassia:
[1:28] So my most likely path was always academia, but I wanted to give industry a try. I basically thought it's either now or never: if I go straight to a PhD and straight to a faculty position, I'll never get to try the industry option. So that was the main reason I took the Goldman Sachs job. It was still a highly research-heavy job. It was in quantitative investment strategies, so very quant, lots of people with PhDs there. The job consisted of half research and half portfolio management. But the portfolio management really was not discretionary, because everything we did was algorithmic; it was more checking that our code ran fine and that everything made sense. So really everything was systematic, and the research was discovering the systematic factors that we were trading on. I gave it a try. It was interesting. It was informative, but it was not for me. I'm just the kind of person that is most creative in an unconstrained environment. If I can research anything I want, there are a lot of things I want to study. As soon as I have a manager and some constraints, somehow that dries up my creativity.
Brent:
[2:30] And you do that post-PhD?
Anastassia:
[2:32] I do that before my PhD.
Brent:
[2:33] Before, okay.
Anastassia:
[2:34] Between my undergrad and PhD.
Brent:
[2:35] Okay, okay.
Keller:
[2:36] And then was your PhD focused solely on behavioral finance?
Anastassia:
[2:40] No, my PhD was in what's called business economics at Harvard. It's basically a cross-department program between the Harvard Business School and the Harvard Economics Department. It's practically equivalent to the Harvard economics PhD, but with a little bit more resources from the business school. So definitely a very attractive option for anybody considering PhD programs. And as an economics PhD, it was very broad. The first two years were the usual courses: micro, macro, econometrics. And then after that, we did fields. My fields actually were in behavioral, in corporate finance, but also in contract theory, because at the beginning I thought maybe I wanted to do theory. I ended up going with empirical research. But at the beginning, I tried out everything that was interesting to me.
Brent:
[3:26] What are fields?
Anastassia:
[3:27] Those are kind of like concentrations. So within economics, those are going to be the fields that you're really researching the literature in depth in that second year of your PhD, really getting to know what has been done, what are some of the important questions out there.
Keller:
[3:42] And looking into your work after your PhD, could you just define the difference between behavioral finance compared to behavioral economics or whether there even is a difference between the two?
Anastassia:
[3:52] It's similar to the difference between finance and economics. Finance is a subfield of economics; economics generally has other subfields, like labor economics or macroeconomics, right, or public finance. Finance, the subfield of economics, is really looking at two things. It consists of two main parts: asset pricing, which is the valuation of companies, the valuation of assets, and then corporate finance, which is really decision making at the company level. What is the firm doing, how does it work, and what are the effects on the operations of the firm from various investments and so on? So those are the sub-areas of economics that I would broadly define as finance, and behavioral finance, again, would align with that. Oftentimes you're looking at things like mispricings, which would be the asset pricing part of behavioral finance, or you might look at things like overconfidence of the CEO, which would be more in the corporate finance domain.
Brent:
[4:43] And then, so you started off in behavioral finance. What were some of those major misconceptions you were researching in the beginning of your career?
Anastassia:
[4:52] A lot of my research in behavioral finance was specifically on news. So I was interested in how people, investors, process information in financial news and what are some of the biases with which they process the news. I was looking specifically at institutional investors. So my research was saying that it's not just regular retail traders, the people who are trading in their Robinhood accounts, that have biases or misconceptions. Really, those same behavioral biases, the same psychology, apply also to the institutional investors, to the guys who are at Goldman Sachs. They're still people. And so my research focused on those types of investors and their reactions to news. And I looked at various things. I looked at things like reactions to old news: when the news is getting repeated and people are still trading on it, why might that be? What might be some of the drivers of larger reactions to older news? Things like the effects of presentation of the news: are people reacting differently if the news is being presented more saliently than some other news that might be just as important, but didn't get saliently presented? Or disagreement: are different investors trading differently because of having different backgrounds? Because they've read different things in the past, they have different priors, different expectations, and then the same piece of information is going to cause them to update in different directions, potentially disagreeing and trading against each other.
Keller:
[6:13] What were some of the findings that you found in that? And I guess before that, when you're looking at the news, what were the distributions of news? Was it just U.S. news? Was it international news?
Anastassia:
[6:24] The news that I was studying was news about specific U.S. companies. So it's not something like the Fed cut interest rates, because that's going to affect everything. But it was something like IBM released a new product. The news that has a specific effect on a specific firm. So that way we can test it by looking at what happened to the price of that firm. So that would be the set of news that I was looking at. And to answer your other question with some of the interesting findings we had,
Anastassia:
[6:52] I want to highlight one paper. It was looking at reactions to old news, because previous studies documented that there is an overreaction, a reaction that later on will correct, to news that is already known, news that repeats information from the past. And my co-author and I were asking the question: why? What are some of the reasons this might be happening? And we had a conjecture that sometimes complexity can play a role. If the news is being repeated exactly the same, if one paper writes IBM's earnings are great and another paper writes IBM's earnings are great the next day, that should be relatively straightforward for the investors to tell: hey, I shouldn't be trading twice on this same thing.
Anastassia:
[7:34] But if the news is being recombined, that might be harder to tell. For example, if one paper writes IBM's earnings are great and another paper writes IBM's new product is not doing so well, and then a third paper writes IBM's earnings are great despite the flop of the new product, that's something that now takes more work to figure out: hey, I've actually seen the earnings news. I've seen the product news.
Anastassia:
[8:01] This new piece of news is just combining those two facts. It's not telling me anything new. That is something we conjectured would have a greater effect on the market, a greater likelihood of being perceived as new, despite not actually carrying any new information. And we pursued this question in two ways. First, we designed an experiment that we ran on real traders, where we came up with news headlines that were specifically designed to be exactly the same in everything else, same length, same types of words, same everything, and they just differed in one respect: some of them were new, some of them were repeating one story, and some of them were combining facts from multiple stories. We ran this experiment on real traders to see how they ranked the novelty of those stories, and we were right. They actually perceived the ones that combined facts, even though they had nothing new in them, as being more novel than the ones that were more straightforward repetition.
Anastassia:
[8:59] And then we tested the implications of that on stock prices. We used a very large database of all news going through the Bloomberg terminal. We used textual analysis to identify, for each story: is it new textually? Does it repeat text from one previous story? Or does it combine text from several previous stories? And then we took a look at the price responses of the firms, depending on what type of news was coming out that day. And we saw, again, our predictions validated in the data: when a firm had more of this recombination of old news, it had a larger price response and then a larger reversal of that price response in the coming days, meaning it was an overreaction, the market reacted incorrectly.
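For readers who want to see the idea in code, here is a minimal, hypothetical sketch, not the paper's actual pipeline, of how one might classify a story as novel, a repetition of a single earlier story, or a recombination of several earlier stories about the same firm, using simple token overlap. The threshold values are illustrative only.

```python
# Hypothetical sketch of classifying a news story as novel, repetition, or
# recombination based on token overlap with earlier stories about the same firm.
# Thresholds are illustrative, not calibrated values from the paper.

import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def overlap(a, b):
    """Share of story a's tokens that also appear in b (0 to 1)."""
    return len(a & b) / len(a) if a else 0.0

def classify_story(new_text, prior_texts, hi=0.6, lo=0.45):
    """Label a story relative to a list of earlier stories about the same firm."""
    new_toks = tokens(new_text)
    best_single = max((overlap(new_toks, tokens(p)) for p in prior_texts), default=0.0)
    pooled = set().union(*(tokens(p) for p in prior_texts)) if prior_texts else set()
    pooled_overlap = overlap(new_toks, pooled)

    if best_single >= hi:
        return "repetition"      # mostly restates one earlier story
    if pooled_overlap >= hi and best_single < lo:
        return "recombination"   # stitches together pieces of several earlier stories
    return "novel"

# Toy example in the spirit of the IBM headlines discussed above
prior = ["IBM earnings are great this quarter",
         "IBM's new product is not doing so well"]
print(classify_story("IBM earnings are great despite the flop of the new product", prior))
# -> "recombination"
```

In the spirit of the finding described above, stories labeled "recombination" would be the ones expected to trigger the largest price reaction and subsequent reversal.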
Brent:
[9:45] And then kind of piggybacking on the news, you also looked at the difference between placement within the newspaper. Could you briefly highlight what you were finding there?
Anastassia:
[9:53] Yeah. So this is also looking at the Bloomberg Terminal. Why? Because that's a platform that gets used by institutional investors, and our research mostly was on these institutional investors still having those same biases. On the Bloomberg Terminal, there is a lot of news, so it passes by really quickly. But at the company news screen, there are three slots at the top where news gets to stick around for a bit, usually something like 30 to 40 minutes.
Anastassia:
[10:20] Now, those top slots, of course, get given to the news that is more important; Bloomberg tries to put the most important news there. But sometimes, when there is a lot of important news coming out at the same time, some of that important news might not get a slot, because there are only three. And that's what we use; it's what we call an identification strategy. We use the news articles that were marked just as important, that had all the same importance tags as the ones that got a slot, but got unlucky and came out at a time when there wasn't a slot. And we took a look at how the price reactions varied between the articles that got one of those top three slots and other important news that didn't luck out and didn't get those slots. What we saw is that in the long term, they had the same price response. But the speed of that price response was very different. For the ones that got pinned to the top of the screen, within an hour the price was already reflecting the information. Investors saw it and acted on it quickly; it was what we call an efficient market response. For the ones that didn't get one of those top three slots, it could take days to get that same response, because investors were not seeing them, and so it took longer for that news to be incorporated into the stock price.
Anastassia:
[11:37] So even those sophisticated investors, those institutional investors, are also inundated with information. They have so much news being thrown at them that they're going to respond to things like positioning, and they're going to see the news that gets presented
Anastassia:
[11:52] to them more visibly at a higher rate than the news that is important but gets buried.
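As a rough illustration of the comparison described above, the sketch below, with hypothetical column names rather than the study's actual data, contrasts how quickly the eventual price move is realized for equally important stories that did and did not receive a top-of-screen slot.

```python
# Hypothetical sketch: compare cumulative abnormal returns at a short and a long
# horizon for important stories that did vs. did not get a top-of-screen slot.

import pandas as pd

def slot_speed_comparison(events: pd.DataFrame) -> pd.DataFrame:
    """Mean abnormal return by horizon, split by slot status.

    `events` has one row per firm-news event with columns:
    'got_slot' (bool), 'car_1h' (abnormal return one hour after publication),
    'car_5d' (abnormal return five trading days after publication).
    """
    out = events.groupby("got_slot")[["car_1h", "car_5d"]].mean()
    # Fraction of the five-day move already reflected within the first hour
    out["speed"] = out["car_1h"] / out["car_5d"]
    return out

# Usage (hypothetical DataFrame):
# print(slot_speed_comparison(events))
# The pattern described above would show similar 'car_5d' across the two groups,
# but a much higher 'speed' for the stories that got a slot.
```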
Brent:
[11:58] Were you aware of any institutions looking for arbitrage within the news and being able to specifically target non-headline news and react to it quicker?
Anastassia:
[12:08] I'm not. And to be honest, this case is illustrative. It really shows a big difference, but it's a pretty small set of news, right? Because you're only looking at the news that could have been there but wasn't. So that's a pretty small set of news, really selected to have that clean identification. And so in the paper, I do a back-of-the-envelope trading-strategy-type estimation. The answer is yes, you could make money on this, but it's not that large. So depending on your transaction costs, it's not a huge money-making opportunity.
Keller:
[12:43] And then looking at some of your other work around the trading space, some of your earlier work looking at misconceptions in trading, could you speak on some of those?
Anastassia:
[12:53] The misconceptions, I think?
Keller:
[12:55] Misconceptions in terms of people's ability to account for time in trading?
Anastassia:
[13:03] Um, so I think maybe what you're thinking about is the time inconsistency paper.
Keller:
[13:10] Yeah.
Anastassia:
[13:11] But that's not about trading. That's actually, I guess, going back to the behavioral finance versus behavioral economics distinction: I've also done things that are just behavioral economics, not behavioral finance. And that's a paper that's now forthcoming in Management Science. It's also an experimental paper, and it looks at whether people are aware of time inconsistency in others. There's a long literature showing that people are time inconsistent, or in other words present biased: they want to go to the gym tomorrow when asked today, and then tomorrow arrives and they don't feel like going to the gym, or eating healthy, or doing work, or any number of things. And there's also evidence in that literature that people are not aware of this. If you ask them, do you think you will go to the gym tomorrow and eat healthy and get all this work done, they say, yes, of course I will. So they're not aware of the fact that tomorrow will come and they'll feel like doing something else. And so one paper that I have looks at whether this lack of awareness is driven by maybe an inability to understand time inconsistency.
Anastassia:
[14:14] It's hard to say, oh, preferences will change. Or is it driven by overconfidence: I think I will go to the gym tomorrow and eat healthy and do all my work, but I'm aware that other humans are not that good at it. And my experiment shows that it's actually the latter: when I ask people to predict what others are going to be doing, they are spot on. They're really good at understanding that others are going to want to do less work when the time to do the work comes than they planned ahead of time. But they are overconfident about themselves. And so this is speaking to the sources of overconfidence: it's not a cognitive failure. It's not that we don't understand time inconsistency. It's really that we just think that we are going to be better than others at it.
Brent:
[15:03] So one of the last things you looked at for trading was people valuing skilled human capital in popular technologies probably a little bit higher than they should. Could you explain what you were finding there?
Anastassia:
[15:16] Of course. So this paper, I think, really connects well between my research on behavioral finance and my research on technology investments by firms, because it looks at those two things together. It looks at when firms are investing in popular technologies, what happens to their valuation? And indeed, what we find is that there's an overvaluation. That's the behavioral angle. So what we do there is we look at the employees of each firm by looking at the resumes of individual people, linking them to companies and saying, oh, this person works at this company at this point in time. What are their skills? And from that, we can tell whether they're a technical person, or an arts person, or a communications person, right, a salesperson, and so on. So we have 44 skill sets. We use natural language processing to group the hundreds of thousands of individual skills into these skill sets. And then we look at the returns for the firm specifically to the technical skill sets. There are five technical skill sets: IT, mobile networks, software engineering, web development, and data analysis. And what we see in the full sample, looking from 2000 onwards (the sample ends in 2016), is that the companies that have higher shares of employees with these skill sets have higher valuations, but then negative returns in the future.
Anastassia:
[16:38] And we say, why? Why might that be? Is it an overvaluation story? And we do a couple of things to point towards it being this overvaluation story. First, we look at the operations of the firm. We say, well, what happens to the earnings of the firm? What happens to the profitability? And the answer is not much. So those firms hire those technical workers, their valuations go up.
Anastassia:
[16:59] Profitability and earnings don't change much. And so it seems like the investors are excited, they're paying a lot for this company, nothing in particular happens, and so this ends up being an overpayment and the returns are negative. The second thing we do is we think about the five technical skill sets we have, and we think about when those skill sets would have been super popular with investors. IT and mobile networks, that would have been early in our sample, at the beginning of the 2000s. Software engineering, web development, and especially data analysis got popular later, in the 2010s. And then we look at when we saw the high valuation and negative returns for each of the skill sets, and we see that it was exactly the time when that skill set was popular. IT and mobile networks carried high valuation premia and negative returns in the early 2000s, no effect in the 2010s. Data analysis, web development, and software engineering: high valuations, negative returns in the 2010s, no effect earlier in the sample. So it really seems to be that whenever each technology is getting popular, that's when investors are excited about it, they value the firms that hire in that technology more highly, and then it ends up reversing in the longer term with the negative returns.
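To make the structure of that test concrete, here is a small hypothetical sketch of the kind of panel comparison described above. The column names are invented; the actual study works from resume data, 44 NLP-derived skill groups, and standard asset-pricing controls that are omitted here.

```python
# Hypothetical sketch of the panel comparison: sort firms each year into terciles
# of exposure to a technical skill set and compare average next-year returns by
# sub-period. Column names are invented for illustration.

import pandas as pd

def forward_returns_by_skill_share(panel: pd.DataFrame, skill_col: str) -> pd.DataFrame:
    """Average forward return for low/mid/high-exposure firms, by sub-period.

    Expects columns: 'year', 'fwd_return' (return over the following year),
    and `skill_col` (share of the firm's employees in that skill set).
    """
    df = panel.copy()
    # Tercile of skill exposure within each year (0 = low, 2 = high)
    df["tercile"] = df.groupby("year")[skill_col].transform(
        lambda s: pd.qcut(s, 3, labels=False, duplicates="drop")
    )
    df["period"] = pd.cut(df["year"], bins=[1999, 2009, 2016],
                          labels=["2000-2009", "2010-2016"])
    return (df.groupby(["period", "tercile"], observed=True)["fwd_return"]
              .mean()
              .unstack("tercile"))

# Usage (hypothetical panel DataFrame):
# summary = forward_returns_by_skill_share(panel, "share_data_analysis")
# print(summary)  # high-exposure firms underperforming when the skill is "hot"
```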
Keller:
[18:19] Is that a function of the technology itself becoming more accessible over longer periods of time, or just the number of skilled individuals that are able to be hired?
Anastassia:
[18:31] I think it is about the number of individuals that can be hired, and the answer is that there are not that many who are actually skilled. If we look at the job postings for those skills compared to university graduates coming out with majors in, say, computer science, the number of computer science majors didn't change much at all over that time period, whereas the job postings skyrocketed. And so what ends up happening is that some of those positions get filled not with, say, undergraduate degrees, four years of great computer science courses, but with somebody who maybe took a Coursera course on data analysis and now bills themselves as a data analyst. So I think the quality aspect, the limited supply of graduates, is definitely a contributing factor. The technology becomes popular, the firm wants to hire. Not everybody who says they have a skill in Python is actually going to be a great data scientist. And so that is the constraint. And these technologies are general technologies, so this is something anybody can write, that they have a skill in data analysis. We did not, in that paper, actually go out and assess the quality of those employees. So I think a big part of it is just that there's excitement, there's demand, and firms end up hiring lower quality in that skill to fill that demand.
Brent:
[19:43] Do you think firm managers are overreactionary during like popularity booms in new technologies?
Anastassia:
[19:49] Yeah, for sure. And firm managers have their own incentives, right? Of course, they have fiduciary duties to the shareholders, they need to maximize the value of the firm, but they also get some personal benefits from managing a bigger company. It's called empire building. They're managing a cooler company, a company that invests in technologies that are popular at that time. So I think for sure that is something that is contributing to the results. And we talk about it in the paper a little bit, that empire building can actually be an aspect that also contributes. Not just the investor side, but also the managerial side.
Brent:
[20:21] Yeah, that makes sense.
Keller:
[20:22] And looking at like the next big trend right now, we're seeing AI, obviously, every company seems like they're trying to implement it in some way. Can you talk to us a little bit about your recent paper on that,
Keller:
[20:31] in particular, the one on the AI skilled human capital and the impacts that had on firms?
Anastassia:
[20:37] Of course. And it might sound contradictory, but we actually find positive effects from AI. It's not contradictory with the paper that we just talked about, because when we are researching the effects of AI in that series of papers, we have a couple: one that looks at the effects on the firm, one that looks at the effects on the workforce, and then one that looks at the effects on the risk profile of the firms, the systematic correlation of the firm's stock price with the market.
Anastassia:
[21:05] And there, we really want to see firms that are investing in AI. So this is not somebody who put Python or AI into their skills, and we're not verifying quality. Here, we're really talking about a tiny percentage of the workforce that are really doing AI. These are going to be people that either their job title directly talks about them being an artificial intelligence researcher, something like that, or maybe their job title is a software engineer, but they have a patent in AI, or they have a publication on neural networks. So these are now really high-quality, specific investments that firms are making in AI workers. And to give you a bit of context, the percentage of workers that we identify as being AI workers who are doing AI at the firms is similar to the percentage of workers who are patent-holding inventors. So these are really people that are implementing the technology. This is not just somebody writing whatever they feel like in their skills.
Anastassia:
[22:04] And their effects are similar to those of patent-holding inventors. They have a disproportionate effect on the firm. Why? Because by using the technology, they open up a lot of possibilities for that firm. They're bringing in AI technology, they're able to experiment more quickly, they're able to develop new products more quickly, and that helps the firm scale. So the main effect that we see from AI across industries is that firms that hire more AI workers, that invest in AI, are able to grow faster. Their sales grow by an additional 2% per year compared to firms that have one standard deviation less investment in AI. That's a pretty significant effect.
Anastassia:
[22:49] But it's not just sales that grow. Everything grows. The costs also grow: costs of goods sold, operating expenses. Employment grows. So this is not a story of AI replacing jobs. What really seems to be happening is that firms are using AI, and we're really talking about firms using AI. We're not talking about NVIDIA. We're not talking about Microsoft. We're talking about Caterpillar using computer vision in their smart machinery, JP Morgan using AI for trading, pharmaceutical companies using AI to come up with drugs quicker, things like the COVID vaccine that were developed a lot faster than would have been possible without machine learning and AI. We're talking about firms leveraging this technology to improve their existing business. And the main effect is that the firms are able to scale. They're able to experiment faster and develop those new products faster, the COVID vaccine example.
Anastassia:
[23:44] And then as a result of that, they're able to scale. They're producing more. They also need more workers, more product managers to manage all those new products. And so they're able to increase their scale and capture more market share, but not necessarily replace labor. We also see a lot more innovation. That's how they're able to grow: not by replacing workers, but by innovating. More trademarks and more patents, specifically product patents. So we differentiate between product patents and process patents. The effect on product patents is very significant, very large. The effect on process patents is actually smaller and not statistically significant. So really, it seems innovation was the main use case of AI in that first decade. And that has allowed firms investing in AI to scale faster than their competitors.
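A back-of-the-envelope version of that growth comparison might look like the following sketch. The column names are hypothetical, and the underlying papers use much richer controls and fixed effects than this illustration.

```python
# Hypothetical back-of-the-envelope version of the growth comparison: regress
# sales growth on a standardized, lagged AI-worker share with year effects.
# Column names are invented; real specifications include many more controls.

import pandas as pd
import statsmodels.formula.api as smf

def ai_growth_association(panel: pd.DataFrame) -> float:
    """Sales-growth difference associated with +1 s.d. of lagged AI-worker share.

    Expects a firm-year DataFrame with columns 'year', 'sales_growth'
    (log change in sales), and 'ai_share_lag' (lagged share of employees
    identified as AI workers).
    """
    df = panel.copy()
    df["ai_z"] = (df["ai_share_lag"] - df["ai_share_lag"].mean()) / df["ai_share_lag"].std()
    # Year fixed effects via C(year); firm effects and clustering omitted for brevity
    model = smf.ols("sales_growth ~ ai_z + C(year)", data=df).fit()
    return model.params["ai_z"]

# Usage (hypothetical):
# print(ai_growth_association(panel))  # ~0.02 would mean ~2 p.p. faster annual growth
```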
Brent:
[24:35] Yeah. Before we hit product versus process, are you concerned about
Brent:
[24:39] some of these larger companies, which have the ability and the resources to really develop their AI capabilities, taking too large of a market share and capturing too much of the market?
Anastassia:
[24:49] That's a good question. And we definitely do see uneven effects across company size. If we split firms up by their initial size in 2010, into the largest tercile of firms, the middle, and the smallest, what we see is that the positive effects from AI are greatest in the largest tercile of firms, milder in the middle tercile, and pretty much zero in the smallest tercile. So small firms are not benefiting as much, even if they're investing in AI. Why? Because they just don't have the data resources to really leverage the technology. The more data you have from your own operations, the more of a competitive advantage you have when you do bring in AI. So it's true that it benefits larger firms more. And as a result, at the industry level, industries that had larger investments in artificial intelligence over the 2010s did see a larger increase in concentration, a larger share, say, of the industry's sales going to the top player. Are we concerned about that? The answer is yes, of course. But so far, it has not had a negative effect. When we look at markups, so if we look at what the firms are charging, which is usually the negative outcome from a lack of competition, that the monopolist is able to charge higher markups, we don't see that.
Anastassia:
[26:11] So, so far, firms have used AI to expand, to capture more market share. They have not yet translated that into higher prices for consumers or higher markups, but that might just be a matter of time. It's possible that they're first investing in expanding, and once they've captured enough market share, that's when we're going to see the markup effects. So it's definitely something that I often mention, especially when talking to policy people, as a concern to watch out for going forward: concentration is increasing. And even if we haven't seen the downstream effects of that yet, that's something we should be concerned about for the future.
Brent:
[26:49] Certainly.
Keller:
[26:50] But it's not necessarily that AI is kind of the barrier to entry. It's more the size of data, right? Even though AI is leading to more concentration, it isn't posing more of a barrier to start.
Anastassia:
[27:01] It's just that the usefulness of AI is not going to be the same across all firms, right? The smaller firms can use off-the-shelf type tools, right? So think about something like ChatGPT. Anybody can go and use the general model, the model that's trained on the internet. But if you really want to have a model that's domain-specific, that's trained on your particular problem, then it helps to have a lot of data in-house. And data is oftentimes, especially in finance, it's a big comparative advantage. If everybody's trading on the same data, nobody's going to make much profits, especially if they have the same algorithms. If you have some information others don't, that's when you potentially get to benefit. And AI is just a magnifying glass on that. It really lets you leverage data more effectively. And so those who already had the data advantage are now going to be able to
Anastassia:
[27:53] benefit even more. Yeah.
Brent:
[27:55] And then were you surprised to see process innovation not as impactful? And we'll start there.
Anastassia:
[28:02] We were. Yeah. I think that was probably the most surprising of our findings is that we saw no effects on productivity.
Anastassia:
[28:11] Because oftentimes the image that people have in mind of AI is that it's going to replace labor. It's going to make the firm productive, produce the same amount with fewer employees. But that's not what we saw. We saw produce more with more employees. So we were surprised. A lot of people were surprised. And we don't see productivity effects anywhere. Process patents compared to product patents are not going up significantly. Sales per worker are not going up. Why? Because employment is increasing at the same rate as sales. So when you divide them, you get null effect. Other measures of productivity also not going up. So it's surprising.
Anastassia:
[28:47] It is consistent, though, with what other people have also found using other data and other methods to look at AI. They've also found increases in the growth of the firm, and no effect on productivity. So that seems to be an empirical fact about that first decade. Again, it could be a matter of time. We did look a little bit at timing, leveraging the full decade from 2010 onwards, by looking at whether the firms that invested in AI earlier, in the first half of the 2010s, saw some productivity effects in the latter part of the decade. And the answer is still no. So if there is a lag, it's longer than that. So it seems like the first-order effect is not labor displacement. But there are a couple of caveats there. First, this is looking across the whole economy, at all sorts of sectors, all sorts of firms. There are sectors, and I have a paper on audit, where there is a labor displacement effect.
Anastassia:
[29:47] That's first. And second is, even if total labor is not going down, there can still be a change in the labor force. So we also have a paper that looks at the labor composition effects of AI, and we do find significant effects in terms of the types of workers that these firms are hiring. They're hiring additional employees, but those additional employees might be
Anastassia:
[30:09] more skilled, more technical than their workforces used to be.
Keller:
[30:13] Yeah. And looking at product innovation, do you see regulatory bodies being able to keep pace with the innovation that's coming? Firms that have access to it can keep putting out products rather quickly, and regulatory bodies historically are not able to move as fast. Do you see integration into regulatory bodies to keep pace there? How do you see that dynamic changing in the coming decades?
Anastassia:
[30:38] Great question um we haven't to be honest looked at regulation at all yet that's something that i think people are increasingly thinking about so i definitely expect research to be coming out on that whether from our team or other teams um we haven't looked at that ourselves so i think that's an interesting open question.
Brent:
[30:55] Yeah. And then do you have hopes for any specific industries being able, in the near term, to really see a benefit from process innovation?
Anastassia:
[31:08] I mean, I have the paper on audit.
Brent:
[31:10] So that's the industry that we're going to next.
Anastassia:
[31:14] That's the industry where I guess we had priors of AI really being able to replace human labor and assist with the process. Now, would that show up in patents? Not necessarily, but it does show up in quality. What we see there with audit is that when an audit firm invests in AI, the quality of the audit goes up, meaning that there are fewer restatements, fewer SEC investigations, fewer negative outcomes associated with the financial statements being audited by that auditor. And when we speak to audit partners to really understand how this is happening, what they say is: well, auditing is fundamentally sampling. We're not going to be able to check everything at the firm and everything in the statements, but we can sample, and we can extrapolate and say, hey, this is not adding up, so probably the larger picture isn't adding up either. So where do machine learning and AI come in? Well, they help you sample better, because they're really tailor-made for predictions, for anomaly detection, for identification of risk areas. So AI can flag things, can say, hey, this looks unusual, look here, so that the auditor is not sampling at random but really focusing on the high-risk areas. What that means is that they're more likely to identify issues, and so there are going to be fewer restatements later on.
Anastassia:
[32:40] What it also means potentially is that, well, if each auditor is now looking
Anastassia:
[32:45] at more high-value targets, maybe they need to sample less. And maybe each audit employee can work on more projects, and maybe in the end you need fewer employees. And that's what we see, but with a bit of a lag. Three years, four years later, we see a reduction in the audit personnel at the audit firms that invest in AI, and specifically in the audit personnel that would be doing the day-to-day work. There is not an effect on audit partners, who are doing soft-skills work that is not being replaced by AI, but for audit associates we do see some decline.
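As an illustration of the kind of risk-focused sampling described above (not any audit firm's actual tooling), the sketch below ranks journal entries by an anomaly score so that reviewers concentrate on unusual items rather than sampling purely at random.

```python
# Hypothetical sketch only: rank journal entries by an anomaly score so that
# audit sampling concentrates on unusual, high-risk items.

import pandas as pd
from sklearn.ensemble import IsolationForest

def rank_entries_for_review(entries: pd.DataFrame, feature_cols, n_flag=100):
    """Return the n_flag most anomalous journal entries.

    `entries` is a DataFrame of journal entries; `feature_cols` lists invented
    numeric features (e.g., amount, posting hour, days to period close).
    """
    model = IsolationForest(n_estimators=200, random_state=0)
    model.fit(entries[feature_cols])
    scored = entries.copy()
    # Lower score_samples output means more anomalous
    scored["anomaly_score"] = model.score_samples(entries[feature_cols])
    return scored.nsmallest(n_flag, "anomaly_score")

# Usage (hypothetical):
# flagged = rank_entries_for_review(journal, ["amount", "posting_hour", "days_to_close"])
```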
Brent:
[33:23] Yeah I want to.
Keller:
[34:00] Yeah. So with the reduction of associates, I guess two questions on that. One, at the firm level there's a reduction, but at the industry level, do we see a reduction of that caliber of labor? And then secondly, the paper mentioned that a lot of these AI-trained auditors tend to be more male. Is that something that has historically been the case within the audit space, or is that new with AI?
Anastassia:
[34:25] So, first question, on the industry level: the audit industry is not huge. It's really the Big Four, who have a lot of the business, and then there are other midsize players. And that's whom we looked at. We looked at the top 30 or so audit firms. And we see a reduction associated with the firms that invest more in AI; they see a larger reduction in personnel. Now, if I remember correctly, the total size of the audit personnel is also not going up, probably a slight reduction in total. But really, looking at the differences between firms is what allows us to attribute that to AI, saying, well, this firm invests in AI in this year,
Anastassia:
[35:05] and then three years later is when we see the personnel reduction. So it's a factor that I think helps explain some of the decline in the audit personnel of those firms. Second question, on the composition of the AI workers. Now, AI workers are, again, a very small percentage of the firm's total workforce. These are really the people who are going to be either building internal systems or leveraging external systems. And the way it works in audit is the Big Four typically have their own; they develop their own capabilities. So that's what their AI workers are doing. The other firms are smaller firms, right? So even if, say, 10% of your workforce were AI workers, which is unimaginably high, typically it's more like 0.1% of your workforce that is an AI worker, you would have fewer such people if you're a smaller firm.
Anastassia:
[35:53] And so for those, oftentimes they pool resources; the accounting association has some tools that they use together. But internally, each audit firm still needs to have those AI-trained specialists to bring in the system, to adapt it to their needs, to know how to use it, and to train others. So what the AI personnel at audit firms do is develop and adapt the systems, plus train the audit partners, train the auditors, on how to best leverage the systems. But they're still a pretty tiny percentage of the overall workforce. And that tiny percentage of the overall workforce, across industries, does tend to be of a certain demographic type, which is typically the demographic type of machine learning graduates at US universities.
Brent:
[36:41] And then do you think there are other industries that might have similar processes to auditing? I did a bunch of research on the healthcare industry, where doctors review patients' data, and if AI were flagging certain discrepancies or potential areas to highlight, they could better treat those patients. Are you hopeful for certain other industries to pick up the ball and use some of these process innovations to become more efficient?
Anastassia:
[37:07] For sure. Yeah, healthcare is a great example. And the healthcare industry has been using AI technology for a while, right? The drug development process was one of the earliest applications of really commercial, large-scale effects of AI. Diagnostics is definitely a promising direction. Potentially things like the legal profession have a lot that can also be automated with AI. And in fact, at the AI for Good Foundation, which maybe we'll get a chance to talk about a little bit later, one of the things we've been discussing with the Ukrainian government was using AI tools to write some of the responses and explanations when people write in asking the National Anti-Corruption Prevention Bureau, how do we fill out this declaration form? It's so complicated. Having an automated response system for something like that is a natural application for AI. Right? And in the U.S., I think the tax system...
Keller:
[38:06] Quite a bit.
Anastassia:
[38:09] Quite a bit, too. So definitely there are other industries, other applications, where I think it can be a very helpful tool to augment skilled human labor. Yeah.
Keller:
[38:22] I think we're going to dive right into the AI for Good Foundation. Could you just explain what it is, and then also explain how it's being used for the war in Ukraine, some of the things you mentioned, obviously, like the declaration responses, and some of the other things that you're looking at and using it for?
Anastassia:
[38:37] Of course. So the AI for Good Foundation has actually been around for almost a decade. It started in 2015 with the goal of showcasing how these technologies, machine learning and AI, can be applied for socially beneficial ends, because most of the spending on the technology during the 2010s was really on things like optimizing advertisement or optimizing product positioning on websites, right? And sure, that can help firms scale up, that can help them sell more, hire more workers, but it's not necessarily as socially beneficial as something like optimizing the allocation of crops for a greater crop yield to be able to battle hunger, or medical diagnostics that you mentioned, or tracking indicators like climate change to really understand where we are, what we can do, and how things can help. So the AI for Good Foundation started
Anastassia:
[39:33] within the framework of the United Nations Sustainable Development Goals, showcasing, for each of the SDGs, examples of how this technology can be helpful. And the idea is, as a nonprofit, you're not going to be able to really implement everything, solve everything, but you can show how it can be done. You can pilot, you can sort of build the framework, and then other players, your partners, for-profit entities, can really scale it up. So as an example, for food security, there were partnerships with IBM, with agricultural firms, running workshops for academics, soliciting the best ideas, the best research on applying AI to this domain, to optimizing crop yields, to then incentivize research on that versus just research on things that are going to be helpful to, say, Netflix, right, with the Netflix challenge some years ago.
Anastassia:
[40:33] So that was how it started. It's had quite a few different programs falling within the different SDGs, and roughly I would group them into three buckets. One is policy. So things like digitization policies, AI policies for federal governments, for local governments, helping communities at different scales, at the country level or the city level, use technology better and have better technology policy. AI is sort of the thing most people come to you asking for policy on, but really it's much more than AI, right? The AI for Good Foundation wrote the digitalization and AI technology policy for Ethiopia. Ethiopia wanted an AI policy, but really they needed a lot more before getting to AI. You really want to make sure that the infrastructure is there to support that. So call it AI, but really everything that is an ingredient to get to AI needs to be sorted out as well. So that's the first vertical: policy. And a lot of our work on the war in Ukraine falls in that as well, with sanctions policies, things like technology sanctions. Those recommendations largely end up written by our group because we understand both the economics and the technology aspects. So that's the first pillar. The second one is innovation.
Anastassia:
[41:55] So showcasing how technologies can be used in industries that have social benefits, and innovating in terms of university-industry partnerships, encouraging innovation that way. There was an incubator the AI for Good Foundation ran, kind of SDG ventures, so again, ventures that are going to support the United Nations Sustainable Development Goals, and so on. That's sort of the second pillar. And the third is community resilience. This is where what
Anastassia:
[42:27] we would maybe term humanitarian aid might fall under, but really technology-driven and done in a way that doesn't replace what is happening in the community, but that augments it and allows the community to thrive and help itself. So instead of coming in, say, and just bringing medicine to somewhere like Ukraine for a few months and disrupting the local pharmacies, we would ensure that we provide information on the local pharmacies so people
Anastassia:
[42:57] can go there if they can afford it. But for those who can't, that's where we would bring in the aid. So we have platforms that basically help people find resources and help people request resources if they're in need, with automatic validation of those needs. We verify: yes, this is an elderly person, this is somebody who is poor, and they actually have a need, and then the platform matches them to the volunteer organizations that are able to provide those goods. That's one of the technology platforms we've developed that is operating in Ukraine. But again, we see it as sort of a prototype. We often run things in one place, and then that's something that could be used in other places and that we would love to see scaled elsewhere.
Anastassia:
[43:37] The war, the full-scale war that Russia has escalated in Ukraine, is definitely a setting that has a lot of need, so that's why we step in significantly, but it is also a setting that is really amenable to experimenting, to piloting some of those programs. Because of the war setting, the government is very easy to work with. It's quite decentralized. It's easy to come in, offer help, and be heard, be accepted, to try things out. It's a very innovation-heavy place as well: a lot of technically educated workforce, a lot of startups, a lot of innovation, even during the war. So, again, it is an ideal ecosystem for piloting programs that really could be used for disaster relief and for other needs all over the world.
Brent:
[44:37] Are you able to see measurable impact to date on some of these projects?
Anastassia:
[44:42] Yeah. So a lot of them are technology projects or kind of direct-delivery projects. The latter, of course, scale linearly: we provide medicine for the front lines, we run schools for the children, so the more children we can serve, the better. For the technology programs as well. One of the programs I mentioned is this matching program for aid, and we see that there are thousands of people using it on a regular basis. Another program that we run is for documenting the war, documenting experiences and using technology like AI to identify potential fakes, propaganda, and also identify potential war crimes; we collaborate with the Office of the Prosecutor General in Ukraine and the International Criminal Court. There have already been dozens of investigations with the evidence we've collected on this war diarist platform. It's called svidag.org. And the goal is a story from every Ukrainian. After World War II, it took decades to really reconstruct people's experiences. And oftentimes, what ended up having the largest effect were things like Anne Frank's diary, ordinary people's stories.
Anastassia:
[45:01] Now, we're in an age with technology, with digitization everywhere, with AI. We can be doing this in real time. There is no need to wait to find out really
Anastassia:
[45:10] everything that happened during a war for 20 years. It's important to get those stories now. It's important to get to the people, to get them to understand that everybody has a story worth sharing, and you never know who's going to be the one whose story ends up being the analog of Anne Frank's from World War II. So with that platform, we have, again, thousands of people who've shared their stories, some people who've really done an amazing job using it as a diary, writing every day. Each one of them could be a book. And we have Western journalists visiting the platform, we have Ukrainian journalists visiting the platform. It gets about a hundred thousand views per month, so it's a pretty active platform.
Keller:
[45:55] That's amazing. And have you seen larger media companies coming in and using that platform for their stories, or is it more small, independent journalists?
Anastassia:
[46:03] We definitely have large-media journalists registered on there as well. Yeah, I've seen the list; most of the major papers have journalists registered there as well. So kind of a good mix. And of course, the place with probably the most potential impact, and yet the longest lag to get to that impact from our work, is the policy aspect. With policy, our job is to say what should be done, and then it's the job of the various governments to make sure that it does get done. And that's something that requires a lot of repetition. We've been saying, as an example with the war that Russia's waging in Ukraine, we've been saying for a long time that technology companies should have disconnected from Russia a whole lot more. It's not about just not selling a new iPhone to Russia. It's about not selling extra iPhones to Kazakhstan that end up going to Russia. It's about disabling technology that is in Russia. It could have been very disruptive to just disable all Western technology in March 2022, whereas at the end of 2024, it still hasn't been done fully. It's a very gradual process in terms of policy implementation. So again, we write the advice and then we highlight it. We write op-ed pieces,
Anastassia:
[47:26] highlighting it, reiterating it, and gradually it gets implemented.
Brent:
[47:30] Yeah. For the sanctions that have been, like, placed on Russia, have they been effective?
Anastassia:
[47:37] Yes. Otherwise, we would not hear nearly as many complaints from the Russians, including some of the Russian opposition, about, you know, poor ordinary Russians who suffer under sanctions. Absolutely, they are irritating to Russians at all levels, so therefore they must be effective. Are they as effective as our recommendations would have been had they been fully implemented? No. Again, natural things like third parties, we have recommendations about how to shut that down. We could do things like, for countries that trade actively with Russia, that are basically re-exporting those goods to Russia, we could fix exports to those countries at 2021 levels. Then countries like Kazakhstan, Turkey, everybody that ends up selling to Russia, could decide: do they want to have the goods they were getting for their own people in 2021, or do they want to re-export to Russia? Presumably they would end up re-exporting a whole lot less if we fixed trade with them at the levels from 2021. So examples like that are things we could do additionally to close down the loopholes that exist right now.
Brent:
[48:48] So for those neighboring countries that are friendly with Russia, we saw pretty large increases in purchases of goods that would then be...
Anastassia:
[48:57] Yeah, and why would it be? Why would it be that a small country that trades actively with Russia would double its purchases of Western technology?
Keller:
[49:05] Yeah. And what would economic victory look like for Ukraine? And I guess it's kind of some of what we're talking about, differing from the Western perspective versus the perspective of Russia or the countries that are neighboring and are seeing these economic benefits from the war.
Anastassia:
[50:20] I'm not sure what the question is.
Keller:
[50:22] Like, beyond sanctions, what are some of the economic tools that you think could be used or should be used, whether policy based or not,
Keller:
[50:30] to further the end of the war?
Anastassia:
[50:33] Got it. Yeah. Of course, the biggest one is the frozen Russian assets, which should be used for fixing the damage that Russia has done in Ukraine. And there's movement in that direction, right? There's movement in terms of using the interest earned on those assets, but still not the principal. But I think we should be very clear: this is the registry of damage, these are the assets, and they're going to be used proportionally to the damage. There has to be an incentive for Russia to minimize that damage. So that's still a lever that we have not fully used. There is also the designation of Russia as a state sponsor of terrorism, which would make it costlier for other countries and for companies to deal with it even in an indirect way. That's something that would in itself close a lot of the loopholes that exist with just the current sanctions. So I would say those are the two main things that have not been used from the arsenal, in addition to, of course, improving the implementation of the sanctions that have been imposed.
Keller:
[51:41] And does that designation, like what body gives that designation?
Brent:
[51:47] For the state-sponsored terrorism?
Anastassia:
[51:48] Yeah, I think Congress. So it's...
Brent:
[51:50] But from the U.S. perspective, okay. Yeah. And then looking forward, when the war ends, on the economic side, what's a win for Ukraine? Not in terms of land, which I think is a different topic altogether, but I was curious: I think there are conversations in the West about what would be a win for the West via Ukraine versus what would actually be a win for Ukraine itself and its citizens.
Anastassia:
[52:27] So for Ukraine, by far the most important thing is to neutralize the threat from Russia, because as long as that's not neutralized, it affects everything, including economic development. I have absolutely no question that Ukraine would be an extremely attractive investment opportunity for reconstruction, for business, even now during the war, if there were no threat from Russia. The only thing that makes people hesitant to invest in Ukraine now, and that would still be there if Russia is not neutralized, is the fact that Russia can shoot things down. Now, even that, I think, is perhaps slightly overblown, or it endogenously depends on how much we invest, right? If more Western enterprises were investing in Ukraine, it would be safer. Because let's face it, there would be an incentive to provide Ukraine better air defense, and potentially also more fear on Russia's part of escalating the conflict by hitting Western companies. So that's already the case. And the more we invest, the safer it becomes.
Anastassia:
[53:35] But in terms of getting that process started, it's a whole lot easier if we can be certain that Russia is not going to blow it up in two years, five years, ten years, however long from now. So that's the main thing for Ukraine. And this is why there's a discussion, right, about not compromising with Russia and not negotiating with Russia: any type of agreement, Russia would break. It has broken them. We've written an op-ed about it in the LA Times where we just cataloged every promise Russia made and broke. Russia does not keep promises. We know that.
Anastassia:
[54:11] The reason the discussion often focuses on land, and getting back to the internationally recognized borders of Ukraine, is that to get to that point, to liberate all of that territory, means that Ukraine would be sufficiently strong to push Russia back that far. It would not be nearly as worried about Russia invading again. If Russia is defeated militarily on the land of Ukraine, it's much more likely, at least, that Russia would end up having some kind of reckoning over, well, that was a bad idea, and potentially then some type of leadership change or attitude change. Without that, there wouldn't be. And we've seen that after 2014, of course, after Crimea, after the occupation of the easternmost part of Ukraine: that did not discourage Russia from attacking further. So that's the main thing for Ukraine, even from an economic perspective: it's just important to get to a state where they can really invest in reconstruction, in development, without a very likely fear of it being destroyed again.
Keller:
[55:21] Beyond AI for Good and the resources that you mentioned for that, are there any other resources you would point students to in the context of the Ukraine war, to be aware of what's going on and keep up their education, whether it's on the economics, the policies, or the movements that are happening?
Anastassia:
[55:37] Yeah, so it depends on the interest, of course. On the economics, there's Economists for Ukraine, which is a part of the AI for Good Foundation. This is our collective of economists; we have, I think, over 400 economists in the collective, so there's really a wealth of expertise and information there, with our leadership team and then this entire collective of economists who come in and chip in on various questions. That's something I would highly recommend for the general economic approach. We work on sanctions, we work on reconstruction, we work on economic planning. We also write policy suggestions on what the Ukrainian government should be doing to position itself more strongly, what the reconstruction might look like, what areas to prioritize, and what the West should be doing in terms of sanctions. On sanctions, we work very closely with the Stanford folks, Michael McFaul, so the Yermak-McFaul International Working Group on Sanctions Against Russia.
Anastassia:
[56:40] Our leadership team comprises a large share of the experts in that group, but there are also other very talented experts in that group, including, for example, the Kyiv School of Economics. They're doing a lot of work on economic analysis as well, so I definitely recommend following both the Stanford sanctions group and the KSE on economic questions. In terms of supporting Ukraine, honestly, as I mentioned, there is no economic development without security. So by far the most effective lever for everything, including the economy, is just to support the defenders. Of course, that mostly gets done by governments, right? They're the ones that provide weapons. What individuals can do, and what we do, is medical aid. For those defenders to survive, they need tourniquets, they need medical aid. That's something that's always effective to do with as little intermediation as possible.
Anastassia:
[57:39] Groups that are, I guess, less effective, or less experienced, in Ukraine are the large international organizations, things like the Red Cross or the International Rescue Committee. Even though they may at a certain point in time fundraise for Ukraine, they don't consider themselves obligated to use those funds in Ukraine, or so they perceive it. So you never actually see the money getting into Ukraine with some of those larger organizations. I definitely always suggest working directly with Ukrainian groups, or with groups that are working programmatically. For example, we at AI for Good, at Economists for Ukraine, if money was raised for a certain program, we will use it on that program. We've never had it be the case that we would raise money for a Ukraine program and then spend it on our climate change tracker, or vice versa.
Brent:
[58:35] Yeah, I had one other question; I'll probably move this back before that one. But when talking about the security of Ukraine, do you think a compromise could be not joining NATO, so as not to further provoke Russia, but still being its own independent state, taking Western development and support, maybe financially?
Anastassia:
[58:59] I mean, that's where we are right now, yes, and that's not security. So you would like...
Brent:
[59:06] ...to see them join NATO, or?
Anastassia:
[59:08] I would absolutely like to see them join NATO. I'm not 100% sure that even that would be enough. NATO's credibility right now is not being increased by letting Russia occupy any part of Ukraine, because if you're sitting in a Baltic state reading Article 5, the text is not exactly much more binding than the Budapest Memorandum was, which was Ukraine's agreement with the U.S., U.K., and Russia. So really, NATO is just reputation at this point. The only reason NATO might protect members much smaller than Ukraine is the reputational damage of not doing so. But that's it; there is nothing beyond that. So that's something Ukraine would get by joining NATO, but even that is not everything. Really, the only security guarantee for Ukraine would be a sufficiently defeated Russia.
Brent:
[1:00:11] Yeah, that makes sense. And then kind of stepping back out into more of the academic work as a whole,
Brent:
[1:00:19] where do you see your research going? Like, what are some of your bigger questions that you want to be asking or you think will be most important going forward?
Anastassia:
[1:00:27] Definitely on the technology side; that's where most of my active work is right now. Obviously there are fast-paced developments in the AI space, and we have ever-increasing research looking at that, looking at particular aspects like finance: how well new technology, like large language models, can do in terms of capturing some of the demographic differences in investment strategies, and how useful they might potentially be as tools for investing.
Anastassia:
[1:00:57] Also research on firms' use of AI, looking at things like risks for the firm, right now mostly in the return space, but also thinking about things like regulation risk and other types of risks from AI. And of course, we have work on labor composition as well. I have a paper looking specifically at banking and what happens to older employees as banks tech up, and how employees' skills affect the impact those employees experience: the complementarity, for example, between the age of the employee, which typically makes them more vulnerable to displacement by technology, and the vintage of their skills. If an older employee has kept their skills up to date and still has modern technical skills, they're at less risk of being displaced by the bank investing in technology than an older employee with older-vintage skills. So, definitely, this area of
Anastassia:
[1:01:03] workforce composition changes, the upskilling of the workforce that happens with technology investments, is an area my co-authors and I are looking at very actively.
Keller:
[1:01:16] And then as we part, do you have any advice to students, whether that be broadly or for students interested in economics and finance, on how they should go through their studies when they're looking to enter the workforce, especially in
Keller:
[1:01:26] the context of AI and all this growth? How can they best position themselves?
Anastassia:
[1:01:30] Good question. For academia, coming back to the question you asked earlier about my own background, technical skills in math are sort of a must. For anybody thinking about a PhD in economics or finance, a strong foundation in math is necessary. Thinking outside of academia, for the workforce, here I'm really drawing on the research assistants who have passed through working with me and what I was looking for when hiring undergraduate research assistants, and that's again really fundamental technical knowledge. I don't think it's nearly as important to be using whatever is considered the latest cool tool as it is to really understand the fundamentals of how things work: keeping your code object-oriented, clear, clean.
Anastassia:
[1:02:26] Modular, adaptable, with an understanding of constraints. And I think it's oftentimes actually really valuable to work in a more constrained space rather than automatically saying, hey, I have unlimited computing resources, so I'm not even going to think about optimization. That's still going to cost you more. It's still important to think about optimizing your code and optimizing how you're working with data, not just using resources without acknowledging it.
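A minimal sketch of the kind of modular, resource-aware code described above. The file name, column names, and the chunked-reading approach are illustrative assumptions for the example, not details from the conversation.

```python
import pandas as pd

def load_returns(path: str, chunk_size: int = 100_000):
    """Stream a large returns file in chunks instead of loading it all into memory."""
    for chunk in pd.read_csv(path, chunksize=chunk_size):
        yield chunk

def monthly_mean_returns(chunks):
    """Aggregate mean returns by month, processing one chunk at a time."""
    totals, counts = {}, {}
    for chunk in chunks:
        chunk["month"] = pd.to_datetime(chunk["date"]).dt.to_period("M")
        for month, series in chunk.groupby("month")["return"]:
            totals[month] = totals.get(month, 0.0) + series.sum()
            counts[month] = counts.get(month, 0) + series.count()
    return {month: totals[month] / counts[month] for month in totals}

# Usage (hypothetical file): each step is a small, testable unit, and memory
# stays bounded even if the file is far larger than available RAM.
# means = monthly_mean_returns(load_returns("returns.csv"))
```

The design choice is the point: small functions that each do one thing, and an explicit decision about how much data sits in memory at once, rather than relying on unlimited resources.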
Anastassia:
[1:02:56] And second, and this is something that is challenging because, of course, the typical undergraduate curriculum is: you learn a method, then you have a problem set and you apply that method; you learn a new method, you have a problem set, you apply that method. So it's really method-driven. But that's not the right approach in either academia or industry, for that matter. The right approach is: you have a problem, and then you find the method, in the arsenal of methods you've learned, that best suits that problem. That's something I've noticed is challenging for students, and something I always push them to do is just articulate the problem. The problem is not "use NLP method such-and-such." The problem is something that does not specify or rely on a specific method; it's something you want to get done. And then you think about: what is the data you need? What is the algorithm you need? What is the best way to solve that problem? So that's something I would definitely encourage everyone to keep at the back of their mind: be problem-first, and then find the right method.
Brent:
[1:03:02] Perfect. Well, thank you so much for coming on today.
Anastassia:
[1:03:05] Thank you for having me. It was a pleasure.
Keller:
[1:03:07] Thank you.