Abstract
Artificial intelligence (AI) has caught up with — and in some contexts even surpassed — humans in the ability to make predictions from data, promising to improve decision-making. However, in cases where humans remain responsible for the final decision, biases in probabilistic reasoning can render even informative AI predictions detrimental to decision-making outcomes. Using a randomised experiment with loan underwriters, we show that providing coarsened AI signals at the right thresholds can improve overall decision-making despite the information loss.