1. Davis, E.; Marcus, G. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM. 2015, 58, 92–103.
2. Zhang, S.; Rudinger, R.; Duh, K.; Van Durme, B. Ordinal common-sense inference. Trans. Assoc. Comput. Linguist. 2017, 5, 379–95.
3. Yang, P.; Liu, Z.; Li, B.; Zhang, P. Implicit relation inference with deep path extraction for commonsense question answering. Neural. Process. Lett. 2022, 54, 4751–68.
4. Wang, C.; Liu, J.; Liu, J.; Wang, W. Inference of isA commonsense knowledge with lexical taxonomy. Appl. Intell. 2022, 53, 5290–303.
5. Nguyen, T. P.; Razniewski, S.; Romero, J.; Weikum, G. Refined commonsense knowledge from large-scale web contents. IEEE. Trans. Knowl. Data. Eng. 2023, 35, 8431–47.
6. Su, C.; Yu, G.; Wang, J.; Yan, Z.; Cui, L. A review of causality-based fairness machine learning. Intell. Robot. 2022, 2, 244–74.
7. Zhong, X.; Cambria, E. Time expression recognition and normalization: a survey. Artif. Intell. Rev. 2023, 56, 9115–40.
8. Ji, A.; Woo, W. L.; Wong, E. W. L.; Quek, Y. T. Rail track condition monitoring: a review on deep learning approaches. Intell. Robot. 2021, 1, 151–75.
9. Lin, Y.; Xie, Z.; Chen, T.; Cheng, X.; Wen, H. Image privacy protection scheme based on high-quality reconstruction DCT compression and nonlinear dynamics. Expert. Syst. Appl. 2024, 257, 124891.
10. Campos, R.; Dias, G.; Jorge, A. M.; Jatowt, A. Survey of temporal information retrieval and related applications. ACM. Comput. Surv. 2014, 47, 1–41.
11. Zhou, B.; Khashabi, D.; Ning, Q.; Roth, D. "Going on a vacation" takes longer than "Going for a walk": a study of temporal commonsense understanding. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics, 2019; pp. 3363–9.
12. Huang, G.; Min, Z.; Ge, Q.; Yang, Z. Towards document-level event extraction via Binary Contrastive Generation. Knowl. Based. Syst. 2024, 296, 111896.
13. Yang, Z.; Du, X.; Rush, A.; Cardie, C. Improving event duration prediction via time-aware pre-training. In: Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, 2020; pp. 3370–8.
14. Zhou, B.; Ning, Q.; Khashabi, D.; Roth, D. Temporal common sense acquisition with minimal supervision. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020; pp. 7579–89.
15. Ning, Q.; Wu, H.; Peng, H.; Roth, D. Improving temporal relation extraction with a globally acquired statistical resource. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana. Association for Computational Linguistics, 2018; pp. 841–51.
16. Lin, S. T.; Chambers, N.; Durrett, G. Conditional generation of temporally-ordered event sequences. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2021; pp. 7142–57.
17. Cole, J. R.; Chaudhary, A.; Dhingra, B.; Talukdar, P. Salient span masking for temporal understanding. In: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, Dubrovnik, Croatia. Association for Computational Linguistics, 2023; pp. 3052–60.
18. Devlin, J.; Chang, M. W.; Lee, K.; Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Association for Computational Linguistics, 2019; pp. 4171–86.
19. Liu, Y.; Ott, M.; Goyal, N.; et al. RoBERTa: a robustly optimized BERT pretraining approach. arXiv 2019, arXiv:1907.11692. Available online: https://doi.org/10.48550/arXiv.1907.11692. (accessed 7 Mar 2025).
20. Ribeiro, M. T.; Wu, T.; Guestrin, C.; Singh, S. Beyond accuracy: behavioral testing of NLP models with CheckList. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020; pp. 4902–12.
21. Kaddari, Z.; Mellah, Y.; Berrich, J.; Bouchentouf, T.; Belkasmi, M. G. Applying the T5 language model and duration units normalization to address temporal common sense understanding on the MCTACO dataset. In: 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020. IEEE, 2020; pp. 1–4.
22. Raffel, C.; Shazeer, N.; Roberts, A.; et al. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv 2019, arXiv:1910.10683. Available online: https://doi.org/10.48550/arXiv.1910.10683. (accessed 7 Mar 2025).
23. Pereira, L.; Liu, X.; Cheng, F.; Asahara, M.; Kobayashi, I. Adversarial training for commonsense inference. In: Proceedings of the 5th Workshop on Representation Learning for NLP, Online. Association for Computational Linguistics, 2020; pp. 55–60.
24. Pereira, L.; Cheng, F.; Asahara, M.; Kobayashi, I. ALICE++: adversarial training for robust and effective temporal reasoning. In: Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation, Shanghai, China. Association for Computational Linguistics, 2021; pp. 373–82. Available online: https://aclanthology.org/2021.paclic-1.40/. (accessed 7 Mar 2025).
25. Kanashiro Pereira, L. Attention-focused adversarial training for robust temporal reasoning. In: Proceedings of the Thirteenth Language Resources and Evaluation Conference, Marseille, France. European Language Resources Association, 2022; pp. 7352–9. Available online: https://aclanthology.org/2022.lrec-1.800/. (accessed 7 Mar 2025).
26. He, P.; Gao, J.; Chen, W. DeBERTaV3: improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. arXiv 2021, arXiv:2111.09543. Available online: https://doi.org/10.48550/arXiv.2111.09543. (accessed 7 Mar 2025).
27. Forbes, M.; Choi, Y. Verb physics: relative physical knowledge of actions and objects. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada. Association for Computational Linguistics, 2017; pp. 266–76.
28. Cocos, A.; Wharton, S.; Pavlick, E.; Apidianaki, M.; Callison-Burch, C. Learning scalar adjective intensity from paraphrases. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics, 2018; pp. 1752–62.
29. Rashkin, H.; Sap, M.; Allaway, E.; Smith, N. A.; Choi, Y. Event2Mind: commonsense inference on events, intents, and reactions. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia. Association for Computational Linguistics, 2018; pp. 463–73.
30. Zellers, R.; Bisk, Y.; Schwartz, R.; Choi, Y. SWAG: a large-scale adversarial dataset for grounded commonsense inference. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics, 2018; pp. 93–104.
31. Ning, Q.; Wu, H.; Roth, D. A multi-axis annotation scheme for event temporal relations. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia. Association for Computational Linguistics, 2018; pp. 1318–28.
32. Vashishtha, S.; Van Durme, B.; White, A. S. Fine-grained temporal relation extraction. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Association for Computational Linguistics, 2019; pp. 2906–19.
33. Ning, Q.; Zhou, B.; Feng, Z.; Peng, H.; Roth, D. CogCompTime: a tool for understanding time in natural language. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Brussels, Belgium. Association for Computational Linguistics, 2018; pp. 72–7.
34. Leeuwenberg, A.; Moens, M. F. Temporal information extraction by predicting relative time-lines. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics, 2018; pp. 1237–46.
35. Li, Z.; Ding, X.; Liu, T. Constructing narrative event evolutionary graph for script event prediction. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2018; pp. 4201–7.
36. Williams, J. Extracting fine-grained durations for verbs from Twitter. In: Proceedings of ACL 2012 Student Research Workshop, Jeju Island, Korea. Association for Computational Linguistics, 2012; pp. 49–54. Available online: https://aclanthology.org/W12-3309/. (accessed 7 Mar 2025).
37. Vempala, A.; Blanco, E.; Palmer, A. Determining event durations: models and error analysis. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana. Association for Computational Linguistics, 2018; pp. 164–8.
38. Clark, P.; Cowhey, I.; Etzioni, O.; et al. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv 2018, arXiv:1803.05457. Available online: https://doi.org/10.48550/arXiv.1803.05457. (accessed 7 Mar 2025).
39. Ostermann, S.; Roth, M.; Modi, A.; Thater, S.; Pinkal, M. SemEval-2018 Task 11: machine comprehension using commonsense knowledge. In: Proceedings of the 12th International Workshop on Semantic Evaluation, New Orleans, Louisiana. Association for Computational Linguistics, 2018; pp. 747–57.
40. Merkhofer, E.; Henderson, J.; Bloom, D.; Strickhart, L.; Zarrella, G. MITRE at SemEval-2018 Task 11: commonsense reasoning without commonsense knowledge. In: Proceedings of the 12th International Workshop on Semantic Evaluation, New Orleans, Louisiana. Association for Computational Linguistics, 2018; pp. 1078–82.
41. Mostafazadeh, N.; Chambers, N.; He, X.; et al. A corpus and cloze evaluation for deeper understanding of commonsense stories. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California. Association for Computational Linguistics, 2016; pp. 839–49.
42. Ning, Q.; Wu, H.; Han, R.; Peng, N.; Gardner, M.; Roth, D. TORQUE: a reading comprehension dataset of temporal ordering questions. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020; pp. 1158–72.
43. Qin, L.; Gupta, A.; Upadhyay, S.; He, L.; Choi, Y.; Faruqui, M. TIMEDIAL: temporal commonsense reasoning in dialog. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2021; pp. 7066–76.
44. Yin, S.; Xiang, Z. A hyper-heuristic algorithm via proximal policy optimization for multi-objective truss problems. Expert. Syst. Appl. 2024, 256, 124929.
45. Kobayashi, S. Contextual augmentation: data augmentation by words with paradigmatic relations. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana. Association for Computational Linguistics, 2018; pp. 452–7.
46. Zhang, X.; Zhao, J.; LeCun, Y. Character-level convolutional networks for text classification. arXiv 2015, arXiv:1509.01626. Available online: https://doi.org/10.48550/arXiv.1509.01626. (accessed 7 Mar 2025).
47. Wang, W. Y.; Yang, D. That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal. Association for Computational Linguistics, 2015; pp. 2557–63.
48. Sennrich, R.; Haddow, B.; Birch, A. Improving neural machine translation models with monolingual data. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany. Association for Computational Linguistics, 2016; pp. 86–96.
49. Yu, A. W.; Dohan, D.; Luong, M.; et al. QANet: combining local convolution with global self-attention for reading comprehension. arXiv 2018, arXiv:1804.09541. Available online: https://doi.org/10.48550/arXiv.1804.09541. (accessed 7 Mar 2025).
50. Fadaee, M.; Monz, C. Back-translation sampling by targeting difficult words in neural machine translation. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics, 2018; pp. 436–46.
51. Sugiyama, A.; Yoshinaga, N. Data augmentation using back-translation for context-aware neural machine translation. In: Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019), Hong Kong, China. Association for Computational Linguistics, 2019; pp. 35–44.
52. He, P.; Liu, X.; Gao, J.; Chen, W. DeBERTa: decoding-enhanced BERT with disentangled attention. arXiv 2020, arXiv:2006.03654. Available online: https://doi.org/10.48550/arXiv.2006.03654. (accessed 7 Mar 2025).
53. Clark, K.; Luong, M. T.; Le, Q. V.; Manning, C. D. ELECTRA: pre-training text encoders as discriminators rather than generators. arXiv 2020, arXiv:2003.10555. Available online: https://doi.org/10.48550/arXiv.2003.10555. (accessed 7 Mar 2025).
54. Hadsell, R.; Rao, D.; Rusu, A.; Pascanu, R. Embracing change: continual learning in deep neural networks. Trends. Cogn. Sci. 2020, 24, 1028–40.
55. Wei, J.; Zou, K. EDA: easy data augmentation techniques for boosting performance on text classification tasks. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics, 2019; pp. 6383–9.
56. Tiedemann, J.; Thottingal, S. OPUS-MT - building open translation services for the world. In: Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, Lisboa, Portugal. European Association for Machine Translation, 2020; pp. 479–80. Available online: https://aclanthology.org/2020.eamt-1.61/. (accessed 7 Mar 2025).
58. Song, X.; Salcianu, A.; Song, Y.; Dopson, D.; Zhou, D. Fast WordPiece tokenization. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2021; pp. 2089–103.
59. Kimura, M.; Kanashiro Pereira, L.; Kobayashi, I. Toward building a language model for understanding temporal commonsense. In: Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop. Association for Computational Linguistics, 2022; pp. 17–24.