Measuring the Informativeness of Audit Reports: A Machine Learning Approach

If you have a question about this talk, please contact CERF Admin.

This paper studies the informational value of audit reports using computational linguistic tools based on FinBERT, a BERT-based language model pre-trained on financial texts. We analyze the topics within audit reports and classify them into 41 labels, organized into standard and expanded components. The standard components contain boilerplate language on audit scope, opinion, and basis for opinion. In contrast, the expanded components contain explanatory language, audit matters, and discussions of audit procedures that reflect auditor judgment. Contrary to the perception that audit reports lack informational value, we find that the addition of new sentences to the expanded components carries strong implications for client firms’ future performance and misstatement risk. Firms with larger changes in the expanded components exhibit poorer future returns, less persistent operating performance, and a higher likelihood of future financial restatements. These changes trigger investor trading, reducing bid-ask spreads around the release of audit reports. Regulatory influences and litigation pressures both drive these changes, underscoring the role of public and private oversight in enhancing audit report informativeness.
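As a rough illustration of the classification step described above (a minimal sketch, not the authors' actual pipeline): the snippet below runs sentence-level classification with a publicly available FinBERT checkpoint through the Hugging Face transformers pipeline. The checkpoint shown (yiyanghkust/finbert-tone, a three-way tone classifier) is a stand-in, since the paper's 41-label audit-report classifier is not published with this abstract; the sample sentences are invented for illustration.

    # A minimal sketch of FinBERT-style sentence classification, assuming the
    # Hugging Face transformers library is installed. The checkpoint below is
    # a public three-way tone classifier used as a stand-in; it is NOT the
    # 41-label audit-report classifier described in the talk.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="yiyanghkust/finbert-tone")

    # Invented example sentences: one boilerplate ("standard") and one
    # judgment-laden ("expanded"), echoing the abstract's distinction.
    sentences = [
        "We have audited the accompanying consolidated financial statements.",
        "The valuation of goodwill required especially subjective auditor judgment.",
    ]

    # The pipeline returns one {'label': ..., 'score': ...} dict per sentence.
    for sentence, result in zip(sentences, classifier(sentences)):
        print(f"{result['label']:>8}  {result['score']:.3f}  {sentence}")

In the paper's setting, one would replace the stand-in model with a checkpoint fine-tuned on audit-report sentences and aggregate the per-sentence labels into the standard and expanded components at the report level.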

This talk is part of the CERF and CF Events series.
