
Commonsense Reasoning and Explainable Artificial Intelligence Using Large Language Models



  • Álvez, J., Lucio, P., Rigau, G.: Adimen-SUMO: Reengineering an ontology for first-order reasoning. Int. J. Semant. Web Inf. Syst. (IJSWIS) 8(4), 80–116 (2012). https://ideas.repec.org/a/igg/jswis0/v8y2012i4p80-116.html

  • Barredo Arrieta, A., et al.: Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020). https://doi.org/10.1016/j.inffus.2019.12.012

  • Brown, T., et al.: Language models are few-shot learners. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pp. 1877–1901 (2020). https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf

  • Burkart, N., Huber, M.F.: A survey on the explainability of supervised machine learning. J. Artif. Intell. Res. 70, 245–317 (2021). https://doi.org/10.1613/jair.1.12228

  • Chen, M., D’Arcy, M., Liu, A., Fernandez, J., Downey, D.: CODAH: An adversarially-authored question answering dataset for common sense. In: Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP, pp. 63–69 (2019). https://www.jaredfern.com/publication/codah/

  • Clark, P., et al.: Think you have solved question answering? Try ARC, the AI2 reasoning challenge (2018). arXiv:1803.05457, https://arxiv.org/abs/1803.05457

  • Davis, E.: Logical formalizations of commonsense reasoning: a survey. J. Artif. Intell. Res. 59, 651–723 (2017). https://doi.org/10.1613/jair.5339

  • Davis, E., Marcus, G.: Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM 58(9), 92–103 (2015). https://doi.org/10.1145/2701413

  • Deng, J., Lin, Y.: The benefits and challenges of ChatGPT: An overview. Front. Comput. Intell. Syst. 2(2), 81–83 (2023). https://doi.org/10.54097/fcis.v2i2.4465

  • Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding (2018). arXiv:1810.04805, https://arxiv.org/pdf/1810.04805

  • Du, L., Ding, X., Xiong, K., Liu, T., Qin, B.: e-CARE: a new dataset for exploring explainable causal reasoning. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 432–446. Association for Computational Linguistics (2022). https://aclanthology.org/2022.acl-long.33/

  • Forbus, K.D.: Qualitative process theory. Artif. Intell. 24(1–3), 85–168 (1984). https://doi.org/10.1016/0004-3702(84)90038-9

  • Furbach, U., Hölldobler, S., Ragni, M., Schon, C., Stolzenburg, F.: Cognitive reasoning: A personal view. KI 33(3), 209–217 (2019). https://link.springer.com/article/10.1007/s13218-019-00603-3

  • d’Avila Garcez, A.S., Broda, K., Gabbay, D.M.: Symbolic knowledge extraction from trained neural networks: A sound approach. Artif. Intell. 125(1–2), 155–207 (2001). https://doi.org/10.1016/S0004-3702(00)00077-1

  • d’Avila Garcez, A., Lamb, L., Gabbay, D.: Neural-Symbolic Cognitive Reasoning. Springer, Berlin, Heidelberg (2009). https://doi.org/10.1007/978-3-540-73246-4

  • Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997). https://doi.org/10.1162/neco.1997.9.8.1735

  • Huang, L., Le Bras, R., Bhagavatula, C., Choi, Y.: Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2391–2401. Association for Computational Linguistics (2019). https://aclanthology.org/D19-1243/

  • Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., Müller, K.R.: Unmasking Clever Hans predictors and assessing what machines really learn. Nat. Commun. 10(1), 1096 (2019). https://www.nature.com/articles/s41467-019-08987-4

  • Lenat, D.B.: CYC: a large-scale investment in knowledge infrastructure. Commun. ACM 38(11), 33–38 (1995). https://doi.org/10.1145/219717.219745

  • Lewis, M., et al.: BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension (2019). arXiv:1910.13461

  • Liu, H., Singh, P.: Commonsense reasoning in and over natural language. In: Negoita, M.G., Howlett, R.J., Jain, L.C. (eds.) KES 2004. LNCS (LNAI), vol. 3215, pp. 293–306. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30134-9_40

  • Liu, Y., et al.: RoBERTa: A robustly optimized BERT pretraining approach (2019). arXiv:1907.11692, https://arxiv.org/pdf/1907.11692

  • McCarthy, J.: Programs with common sense (1959). https://www.cs.rit.edu/~rlaz/is2014/files/McCarthyProgramsWithCommonSense.pdf

  • McCarthy, J.: Artificial intelligence, logic and formalizing common sense. In: Thomason, R.H. (ed.) Philosophical Logic and Artificial Intelligence, pp. 161–190. Springer Netherlands, Dordrecht (1989). https://doi.org/10.1007/978-94-009-2448-2_6

  • Mostafazadeh, N., Roth, M., Louis, A., Chambers, N., Allen, J.: LSDSem 2017 shared task: The Story Cloze Test. In: Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pp. 46–51 (2017). https://aclanthology.org/W17-0906.pdf

  • Onoe, Y., Zhang, M.J.Q., Choi, E., Durrett, G.: CREAK: A dataset for commonsense reasoning over entity knowledge (2021). arXiv:2109.01653, https://arxiv.org/pdf/2109.01653

  • Pal, A., Umapathi, L.K., Sankarasubbu, M.: MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In: Proceedings of the Conference on Health, Inference, and Learning (CHIL 2022) (2022). https://arxiv.org/pdf/2203.14371

  • Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. Tech. rep., OpenAI (2019). https://paperswithcode.com/paper/language-models-are-unsupervised-multitask

  • Roemmele, M., Bejan, C.A., Gordon, A.S.: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In: AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, pp. 90–95 (2011). https://aaai.org/papers/02418-choice-of-plausible-alternatives-an-evaluation-of-commonsense-causal-reasoning/

  • Sap, M., et al.: ATOMIC: an atlas of machine commonsense for if-then reasoning. Proc. AAAI Conf. Artif. Intell. 33(01), 3027–3035 (2019). https://doi.org/10.1609/aaai.v33i01.33013027

  • Sap, M., Rashkin, H., Chen, D., LeBras, R., Choi, Y.: Social IQa: Commonsense reasoning about social interactions. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4463–4473. Association for Computational Linguistics (2019). https://aclanthology.org/D19-1454/

  • Siebert, S., Schon, C., Stolzenburg, F.: Commonsense reasoning using theorem proving and machine learning. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds.) Machine Learning and Knowledge Extraction – 3rd IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2019. LNCS, vol. 11713, pp. 395–413. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29726-8_25

  • Singh, S., et al.: COM2SENSE: A commonsense reasoning benchmark with complementary sentences. In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 883–898. Association for Computational Linguistics (2021). https://aclanthology.org/2021.findings-acl.78

  • Speer, R., Chin, J., Havasi, C.: ConceptNet 5.5: An open multilingual graph of general knowledge. Proc. AAAI Conf. Artif. Intell. 31(1) (2017). https://doi.org/10.1609/aaai.v31i1.11164

  • Talmor, A., Herzig, J., Lourie, N., Berant, J.: CommonsenseQA: A question answering challenge targeting commonsense knowledge (2018). arXiv:1811.00937, https://arxiv.org/pdf/1811.00937

  • Taylor, J.E.T., Taylor, G.W.: Artificial cognition: how experimental psychology can help generate explainable artificial intelligence. Psychon. Bull. Rev. 28(2), 454–475 (2021). https://doi.org/10.3758/s13423-020-01825-5

  • Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J.: Explainable AI: A brief survey on history, research areas, approaches and challenges. In: Tang, J., Kan, M.Y., Zhao, D., Li, S., Zan, H. (eds.) NLPCC 2019. LNCS (LNAI), vol. 11839, pp. 563–574. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32236-6_51

  • Zellers, R., Bisk, Y., Schwartz, R., Choi, Y.: SWAG: A large-scale adversarial dataset for grounded commonsense inference. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 93–104. Association for Computational Linguistics (2018). https://aclanthology.org/D18-1009/

  • Zimmer, M., et al.: Differentiable logic machines (2021, latest revision 2023). arXiv:2102.11529, https://arxiv.org/abs/2102.11529


