Abstract
This paper analyzes more than 10,000 large language model (LLM) responses to finance-related prompts and identifies tradeoffs that can guide both academics and practitioners in using these models for finance applications. Using novel data, we show that some models and methods of interaction are appropriate for users who place a high value on accuracy (i.e., correctness), while others are better for generating responses that are similar to human expert-written text. We identify which finance tasks are associated with higher levels of LLM accuracy and show that the appropriate use of LLMs is task-specific, not job-specific.