Abstract

Purpose: A significant paradox undercuts artificial intelligence's promise in strategic marketing: while 92% of organizations already use AI-generated insights, 74% of executives distrust them for crucial decisions.


Research Design and Methodology: This study addresses the credibility dilemma through a blind test in which 200 Chief Marketing Officers from Fortune 500 companies evaluated reports on identical business challenges, half produced by premier AI platforms (GPT-4 and custom LLMs) and half by experienced human analysts.


Findings and Discussion: The blind test revealed an unexpected discrepancy: although NLP-based assessment indicated that AI matched or exceeded human report quality in 82% of cases, with higher predictive accuracy (+14%) and greater data comprehensiveness, executives rejected 68% of the algorithmically generated insights. Multivariate analysis identified explanatory deficits as the decisive factor: AI's inability to communicate why patterns mattered (causal reasoning), to ground findings in operational realities (contextual framing), and to structure insights coherently (narrative flow) accounted for 53% of the trust gap. This "analytics without understanding" dilemma was evident when CMOs ignored an AI report that accurately predicted telecom churn because it overlooked how back-to-school tuition payments stretched household budgets, the very context that would have made the finding actionable. The study proposes a hybrid approach that appends brief human-authored "why explanations" (averaging about 47 words) to AI outputs, increasing adoption intent by 40% while preserving 60% of the efficiency gains.
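
To make the logic behind the 53% figure concrete, the following is a minimal, hypothetical sketch of a hierarchical (delta R-squared) regression of the kind such a multivariate analysis might use. It is not the authors' code: all data below are synthetic, and the variable names (accuracy, causal, contextual, narrative) are illustrative assumptions, not measures from the study.

```python
# Illustrative sketch only (not the study's code): hierarchical
# (delta R-squared) decomposition of a "trust gap". All data are
# synthetic; predictor names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # mirrors the 200-CMO sample size

# Hypothetical 1-7 scale scores for one AI report per executive.
accuracy   = rng.uniform(1, 7, n)   # objective report quality
causal     = rng.uniform(1, 7, n)   # "why patterns matter"
contextual = rng.uniform(1, 7, n)   # grounding in operations
narrative  = rng.uniform(1, 7, n)   # coherence of the story

# Synthetic trust ratings, driven mostly by explanation quality.
trust = (0.2 * accuracy + 0.5 * causal + 0.4 * contextual
         + 0.3 * narrative + rng.normal(0.0, 1.0, n))

def r_squared(predictors, y):
    """R^2 of an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1.0 - residuals.var() / y.var()

r2_accuracy_only = r_squared([accuracy], trust)
r2_full = r_squared([accuracy, causal, contextual, narrative], trust)

# Delta R^2: share of trust variance explained by the three
# explanation-quality factors beyond accuracy alone (the study
# reports 53% on its real data).
print(f"explanatory share of trust variance: {r2_full - r2_accuracy_only:.0%}")
```

The delta R-squared compares a baseline model (accuracy alone) against a full model adding the three explanation-quality factors; the increase in explained variance is the portion of trust attributable to explanatory quality rather than analytical performance.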


Implications: These findings reframe algorithm aversion as a fundamental challenge of epistemic reconciliation, one in which narrative intelligence bridges computational power and human judgment. As AI increasingly shapes strategic decision-making, this study offers a trust-calibration framework for maximizing its potential while preserving interpretive depth.

Keywords

AI-generated reports; algorithm aversion; analytic narratives; trust calibration; explanatory deficit; epistemic reconciliation

Article Details

How to Cite
Dzreke, S., & Dzreke, S. E. (2025). The Credibility Gap: Why 68% of Marketers Reject Superior AI Reports (200-CMO Blind Test). Advances in Business & Industrial Marketing Research, 3(3), 177–188. https://doi.org/10.60079/abim.v3i3.625

References

  1. Altimeter. (2023). The state of digital maturity: 2023 benchmark report. https://altimetergroup.com/digital-maturity-2023
  2. Chen, L., Kumar, V., & Zhang, Z. (2023). When algorithms outperform intuition: The cost of ignoring predictive analytics in marketing. Journal of Marketing Research. Advance online publication. https://doi.org/10.1177/00222437231168720
  3. Chen, L., Syam, N., & Patel, P. C. (2023). Algorithmic aversion in C-suite decision-making: Evidence from Fortune 500 firms. Journal of Marketing Research, 60(2), 201–219. https://doi.org/10.1177/00222437221125689
  4. Cheng, Z., & Jiang, H. (2023). Cultural intelligence in algorithmic marketing: Bridging the semantic gap in global branding. Journal of International Marketing, 31(2), 88–105. https://doi.org/10.1177/1069031X221145672
  5. CMO Council. (2023). The analytics credibility crisis: Why marketers reject their own data. https://cmocouncil.org/ai-rejection-study
  6. Davenport, T. H. (2018). The AI advantage: How to put the artificial intelligence revolution to work. MIT Press.
  7. Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.
  8. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
  9. Forrester. (2022). Marketing analytics technology landscape, Q4 2022. Forrester Research.
  10. Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  11. International Data Corporation. (2023). Worldwide artificial intelligence spending guide. IDC Doc. #US50564023.
  12. Jain, M. (2022). The rhetoric of machine-generated reports: Tone, trust, and strategic adoption. Journal of Business Communication, 59(3), 287–310. https://doi.org/10.1177/00219436221082859
  13. Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse to algorithms? A comprehensive literature review on algorithm aversion. European Journal of Information Systems, 29(6), 1–25. https://doi.org/10.1080/0960085X.2020.1773132
  14. Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
  15. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335
  16. Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning. University of Illinois Press.
  17. Rosenthal, R., & Rosnow, R. L. (2008). Essentials of behavioral research: Methods and data analysis (3rd ed.). McGraw-Hill.
  18. Shrestha, Y. R., Ben-Menahem, S. M., & Von Krogh, G. (2019). Organizational decision-making structures in the age of artificial intelligence. California Management Review, 61(4), 66–83. https://doi.org/10.1177/0008125619862257
  19. Suresh, H., Guttag, J. V., Horvitz, E., & Kolter, J. Z. (2021). Beyond fairness metrics: Roadblocks and opportunities for real-world algorithmic auditing. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), 560–575. https://doi.org/10.1145/3442188.3445921
  20. Weick, K. E. (1995). Sensemaking in organizations. Sage Publications.
  21. Workshop on Human Interpretability in Machine Learning. (2023, June 23–24). Proceedings of the 2nd Workshop on Human Interpretability in Machine Learning (WHI 2023), held at the 40th International Conference on Machine Learning, Honolulu, HI, United States.