REFERENCES

1. Mukherjee, D.; Gupta, K.; Chang, L. H.; Najjaran, H. A survey of robot learning strategies for human-robot collaboration in industrial settings. Robot. Cim. Int. Manuf. 2022, 73, 102231.

2. Sánchez-Ibáñez, J. R.; Pérez-Del-Pulgar, C. J.; García-Cerezo, A. Path planning for autonomous mobile robots: a review. Sensors 2021, 21, 7898.

3. Moshayedi, A. J.; Roy, A. S.; Liao, L.; Khan, A. S.; Kolahdooz, A.; Eftekhari, A. Design and development of FOODIEBOT robot: from simulation to design. IEEE. Access. 2024, 12, 36148-72.

4. Ni, J.; Chen, Y.; Tang, G.; Shi, J.; Cao, W.; Shi, P. Deep learning-based scene understanding for autonomous robots: a survey. Intell. Robot. 2023, 3, 374-401.

5. Zhou, C.; Huang, B.; Fränti, P. A review of motion planning algorithms for intelligent robots. J. Intell. Manuf. 2022, 33, 387-424.

6. Zhang, Y.; Tian, G.; Shao, X. Safe and efficient robot manipulation: task-oriented environment modeling and object pose estimation. IEEE. Trans. Instrum. Meas. 2021, 70, 1-12.

7. Church, K. W. Word2Vec. Nat. Lang. Eng. 2017, 23, 155-62.

8. Pennington, J.; Socher, R.; Manning, C. D. GloVe: global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP); 2014 Oct 25-29; Doha, Qatar. 2014, pp. 1532-43. Available from: https://aclanthology.org/D14-1162.pdf. (accessed 2025-01-21)

9. Weld, H.; Huang, X.; Long, S.; Poon, J.; Han, S. C. A survey of joint intent detection and slot filling models in natural language understanding. ACM. Comput. Surv. 2023, 55, 1-38.

10. Iyyer, M.; Manjunatha, V.; Boyd-Graber, J.; Daumé III, H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing; 2015 Jul 26-31; Beijing, China. 2015, pp. 1681-91.

11. Jiang, C.; Xu, Q.; Song, Y.; Yuan, X.; Pang, B.; Li, Y. Discrete sequence rearrangement based self-supervised chinese named entity recognition for robot instruction parsing. Intell. Robot. 2023, 3, 337-54.

12. Kleenankandy, J.; Abdul Nazeer, K. A. An enhanced Tree-LSTM architecture for sentence semantic modeling using typed dependencies. Inform. Process. Manag. 2020, 57, 102362.

13. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE. 1998, 86, 2278-324.

14. Ayetiran, E. F. Attention-based aspect sentiment classification using enhanced learning through CNN-BiLSTM networks. Knowl. Based. Syst. 2022, 252, 109409.

15. Vaswani, A.; Shazeer, N.; Parmar, N.; et al. Attention is all you need. arXiv2024, arXiv:1706.03762. Available online: https://doi.org/10.48550/arXiv.1706.03762 (accessed 21 Jan 2025)

16. Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 2011, 12, 2493-537. Available from: https://www.jmlr.org/papers/volume12/collobert11a/collobert11a.pdf. (accessed 2025-01-21)

17. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural. Comput. 2019, 31, 1235-70.

18. Devlin, J.; Chang, M. W.; Lee, K.; Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv2024, arXiv:1810.04805. Available online: https://doi.org/10.48550/arXiv.1810.04805 (accessed 21 Jan 2025)

19. Achiam, J.; Adler, S.; Agarwal, S.; et al. GPT-4 technical report. arXiv2024, arXiv:2303.08774. Available online: https://doi.org/10.48550/arXiv.2303.08774 (accessed 21 Jan 2025)

20. Zhang, C.; Chen, J.; Li, J.; Peng, Y.; Mao, Z. Large language models for human–robot interaction: a review. Biomim. Intell. Robot. 2023, 3, 100131.

21. Liu, B.; Lane, I. Attention-based recurrent neural network models for joint intent detection and slot filling. Interspeech 2016, 2016, 685-9.

22. Goo, C. W.; Gao, G.; Hsu, Y. K.; et al. Slot-gated modeling for joint slot filling and intent prediction. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2018 Jun 1-6; New Orleans, Louisiana, USA. 2018, pp. 753-7.

23. Abro, W. A.; Qi, G.; Aamir, M.; Ali, Z. Joint intent detection and slot filling using weighted finite state transducer and BERT. Appl. Intell. 2022, 52, 17356-70.

24. Qin, L.; Liu, T.; Che, W.; Kang, B.; Zhao, S.; Liu, T. A co-interactive transformer for joint slot filling and intent detection. In: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021 Jun 6-11; Toronto, Canada. IEEE; 2021. pp. 8193-7.

25. Cheng, L.; Jia, W.; Yang, W. An effective non-autoregressive model for spoken language understanding. In: Proceedings of the 30th ACM International Conference on Information & Knowledge Management; 2021 Nov 1-5; Boise, USA. Association for Computing Machinery; 2021, pp. 241-50.

26. He, T.; Xu, X.; Wu, Y.; Wang, H.; Chen, J. Multitask learning with knowledge base for joint intent detection and slot filling. Appl. Sci. 2021, 11, 4887.

27. Rajapaksha, U. U. S.; Jayawardena, C. Ontology based optimized algorithms to communicate with a service robot using a user command with unknown terms. In: 2020 2nd International Conference on Advancements in Computing (ICAC); 2020 Dec 10-11; Malabe, Sri Lanka. IEEE; 2020. pp. 258-62.

28. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE. Trans. Pattern. Anal. Mach. Intell. 2017, 39, 1137-49.

29. Duan, H.; Yang, Y.; Li, D.; Wang, P. Human–robot object handover: recent progress and future direction. Biomim. Intell. Robot. 2024, 4, 100145.

30. Wang, C. Y.; Yeh, I. H.; Liao, H. Y. M. YOLOv9: learning what you want to learn using programmable gradient information. arXiv2024, arXiv:2402.13616. Available online: https://doi.org/10.48550/arXiv.2402.13616 (accessed 21 Jan 2025)

31. Zhang, Y.; Yin, M.; Wang, H.; Hua, C. Cross-level multi-modal features learning with transformer for RGB-D object recognition. IEEE. Trans. Circuits. Syst. Video. Technol. 2023, 33, 7121-30.

32. Liu, W.; Anguelov, D.; Erhan, D.; et al. SSD: Single shot multibox detector. In: Proceedings of the Computer Vision–ECCV 2016: 14th European Conference; 2016 October 11-14; Amsterdam, the Netherlands. Springer International Publishing; 2016, pp. 21-37.

33. Zhu, X.; Lyu, S.; Wang, X.; Zhao, Q. TPH-YOLOv5: improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios. In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW); 2021 Oct 11-17; Montreal, Canada. IEEE; 2021. pp. 2778-88.

34. Zhang, Y.; Tian, G.; Chen, H. Exploring the cognitive process for service task in smart home: a robot service mechanism. Future. Gener. Comput. Syst. 2020, 102, 588-602.

35. Xu, D.; Zhu, Y.; Choy, C. B.; Li, F. F. Scene graph generation by iterative message passing. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 July 21-26; Honolulu, USA. IEEE; 2017. pp. 5410-9.

36. Dai, B.; Zhang, Y.; Lin, D. Detecting visual relationships with deep relational networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 July 21-26; Honolulu, USA. IEEE; 2017. pp. 3076-86.

37. Liu, A. A.; Tian, H.; Xu, N.; Nie, W.; Zhang, Y.; Kankanhalli, M. Toward region-aware attention learning for scene graph generation. IEEE. Trans. Neural. Netw. Learn. Syst. 2022, 33, 7655-66.

38. Zellers, R.; Yatskar, M.; Thomson, S.; Choi, Y. Neural motifs: scene graph parsing with global context. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018 Jun 18-23; Salt Lake City, USA. IEEE; 2018. pp. 5831-40.

39. Zhang, M.; Tian, G.; Zhang, Y.; Liu, H. Sequential learning for ingredient recognition from images. IEEE. Trans. Circuits. Syst. Video. Technol. 2023, 33, 2162-75.

40. Zhou, F.; Liu, H.; Zhao, H.; Liang, L. Long-term object search using incremental scene graph updating. Robotica 2023, 41, 962-75.

41. Riaz, H.; Terra, A.; Raizer, K.; Inam, R.; Hata, A. Scene understanding for safety analysis in human-robot collaborative operations. In: 2020 6th International Conference on Control, Automation and Robotics (ICCAR); 2020 Apr 20-23; Singapore. IEEE; 2020. pp. 722-31.

42. Jiao, Z.; Niu, Y.; Zhang, Z.; Zhu, S. C.; Zhu, Y.; Liu, H. Sequential manipulation planning on scene graph. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2022 Oct 23-27; Kyoto, Japan. IEEE; 2022. pp. 8203-10.

43. Antol, S.; Agrawal, A.; Lu, J.; et al. VQA: visual question answering. In: Proceedings of the IEEE International Conference on Computer Vision; 2015 Dec 7-13; Santiago, Chile. IEEE; 2015. pp. 2425-33.

44. Li, G.; Wang, X.; Zhu, W. Boosting visual question answering with context-aware knowledge aggregation. In: Proceedings of the 28th ACM International Conference on Multimedia; 2020 Oct 12-16; Melbourne, Australia. Association for Computing Machinery; 2020. pp. 1227-35.

45. Wang, P.; Yang, A.; Men, R.; et al. OFA: unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In: Proceedings of the 39th International Conference on Machine Learning; Baltimore, Maryland. 2022. pp. 23318-40. Available from: https://proceedings.mlr.press/v162/wang22al.html. (accessed 2025-01-21)

46. Lu, P.; Ji, L.; Zhang, W.; Duan, N.; Zhou, M.; Wang, J. R-VQA: learning visual relation facts with semantic attention for visual question answering. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining; 2018 Aug 19-23; New York, USA. Association for Computing Machinery; 2018. pp. 1880-9.

47. Yu, T.; Yu, J.; Yu, Z.; Huang, Q.; Tian, Q. Long-term video question answering via multimodal hierarchical memory attentive networks. IEEE. Trans. Circuits. Syst. Video. Technol. 2021, 31, 931-44.

48. Kenfack, F. K.; Siddiky, F. A.; Balint-Benczedi, F.; Beetz, M. RobotVQA - a scene-graph- and deep-learning-based visual question answering system for robot manipulation. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2020 Oct 24 - 2021 Jan 24; Las Vegas, USA. IEEE; 2020. pp. 9667-74.

49. Das, A.; Gkioxari, G.; Lee, S.; Parikh, D.; Batra, D. Neural modular control for embodied question answering. arXiv2024, arXiv:1810.11181. Available online: https://doi.org/10.48550/arXiv.1810.11181 (accessed 21 Jan 2025)

50. Luo, H.; Lin, G.; Yao, Y.; Liu, F.; Liu, Z.; Tang, Z. Depth and video segmentation based visual attention for embodied question answering. IEEE. Trans. Pattern. Anal. Mach. Intell. 2023, 45, 6807-19.

51. Chen, Z.; Huang, Y.; Chen, J.; et al. LAKO: knowledge-driven visual question answering via late knowledge-to-text injection. arXiv2024, arXiv:2207.12888. Available online: https://doi.org/10.48550/arXiv.2207.12888 (accessed 21 Jan 2025)

52. Teney, D.; Liu, L.; van den Hengel, A. Graph-structured representations for visual question answering. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 July 21-26; Honolulu, USA. IEEE; 2017. pp. 1-9.

53. Norcliffe-Brown, W.; Vafeias, E.; Parisot, S. Learning conditioned graph structures for interpretable visual question answering. arXiv2024, arXiv:1806.07243. Available online: https://doi.org/10.48550/arXiv.1806.07243 (accessed 21 Jan 2025)

54. Li, J.; Li, D.; Savarese, S.; Hoi, S. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv2024, arXiv:2301.12597. Available online: https://doi.org/10.48550/arXiv.2301.12597 (accessed 21 Jan 2025)

55. Xiao, B.; Wu, H.; Xu, W.; et al. Florence-2: advancing a unified representation for a variety of vision tasks. In: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2024 Jun 16-22; Seattle, USA. IEEE; 2024. pp. 4818-29.

56. Bhatti, U. A.; Tang, H.; Wu, G.; Marjan, S.; Hussain, A.; Sarker, S. K. Deep learning with graph convolutional networks: an overview and latest applications in computational intelligence. Int. J. Intell. Syst. 2023, 2023, 8342104.

57. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv2024, arXiv:1710.10903. Available online: https://doi.org/10.48550/arXiv.1710.10903 (accessed 21 Jan 2025)

58. Chen, Z. M.; Wei, X. S.; Wang, P.; Guo, Y. Multi-label image recognition with graph convolutional networks. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019 Jun 15-20; Long Beach, USA. IEEE; 2019. pp. 5177-86.

59. Mo, S.; Cai, M.; Lin, L.; et al. Mutual information-based graph co-attention networks for multimodal prior-guided magnetic resonance imaging segmentation. IEEE. Trans. Circuits. Syst. Video. Technol. 2022, 32, 2512-26.

60. Li, K.; Zhang, Y.; Li, K.; Li, Y.; Fu, Y. Visual semantic reasoning for image-text matching. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV); 2019 Oct 27 - Nov 2; Seoul, South Korea. IEEE; 2019. pp. 4654-62.

61. Yang, J.; Gao, X.; Li, L.; Wang, X.; Ding, J. SOLVER: scene-object interrelated visual emotion reasoning network. IEEE. Trans. Image. Process. 2021, 30, 8686-701.

62. Liang, Z.; Liu, J.; Guan, Y.; Rojas, J. Visual-semantic graph attention networks for human-object interaction detection. In: 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO); 2021 Dec 27-31; Sanya, China. IEEE; 2021. pp. 1441-7.

63. Xie, J.; Fang, W.; Cai, Y.; Huang, Q.; Li, Q. Knowledge-based visual question generation. IEEE. Trans. Circuits. Syst. Video. Technol. 2022, 32, 7547-58.

64. Lyu, Z.; Wu, Y.; Lai, J.; Yang, M.; Li, C.; Zhou, W. Knowledge enhanced graph neural networks for explainable recommendation. IEEE. Trans. Knowl. Data. Eng. 2023, 35, 4954-68.

65. Zhang, L.; Wang, S.; Liu, J.; et al. MuL-GRN: multi-level graph relation network for few-shot node classification. IEEE. Trans. Knowl. Data. Eng. 2023, 35, 6085-98.

66. Huang, M.; Hou, C.; Yang, Q.; Wang, Z. Reasoning and tuning: graph attention network for occluded person re-identification. IEEE. Trans. Image. Process. 2023, 32, 1568-82.

67. Cui, Y.; Tian, G.; Jiang, Z.; Zhang, M.; Gu, Y.; Wang, Y. An active task cognition method for home service robot using multi-graph attention fusion mechanism. IEEE. Trans. Circuits. Syst. Video. Technol. 2024, 34, 4957-72.

68. Ghallab, M.; Knoblock, C.; Wilkins, D.; et al. PDDL - The planning domain definition language. Washington: University of Washington Press; 1998. pp. 1-27. Available from: https://www.researchgate.net/publication/2278933_PDDL_-_The_Planning_Domain_Definition_Language. (accessed 2025-01-21).

69. Lee, J.; Lifschitz, V.; Yang, F. Action language BC: preliminary report. In: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence; Beijing, China, 2013; pp. 983-9. Available from: https://www.ijcai.org/Proceedings/13/Papers/150.pdf. (accessed 2025-01-21)

70. Khandelwal, P.; Yang, F.; Leonetti, M.; Lifschitz, V.; Stone, P. Planning in action language BC while learning action costs for mobile robots. ICAPS. 2014, 24, 472-80.

71. Savage, J.; Rosenblueth, D. A.; Matamoros, M.; et al. Semantic reasoning in service robots using expert systems. Robot. Auton. Syst. 2019, 114, 77-92.

72. Wang, Z.; Tian, G.; Shao, X. Home service robot task planning using semantic knowledge and probabilistic inference. Knowl. Based. Syst. 2020, 204, 106174.

73. Wang, C.; Xu, D.; Li, F. F. Generalizable task planning through representation pretraining. IEEE. Robot. Autom. Lett. 2022, 7, 8299-306.

74. Adu-Bredu, A.; Zeng, Z.; Pusalkar, N.; Jenkins, O. C. Elephants don’t pack groceries: robot task planning for low entropy belief states. IEEE. Robot. Autom. Lett. 2022, 7, 25-32.

75. Bustamante, S.; Quere, G.; Leidner, D.; Vogel, J.; Stulp, F. CATs: task planning for shared control of assistive robots with variable autonomy. In: 2022 International Conference on Robotics and Automation (ICRA); 2022 May 23-27; Philadelphia, USA. IEEE; 2022. pp. 3775-82.

76. Adu-Bredu, A.; Devraj, N.; Jenkins, O. C. Optimal constrained task planning as mixed integer programming. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2022 Oct 23-27; Kyoto, Japan. IEEE; 2022. pp. 12029-36.

77. Wang, Z.; Gan, Y.; Dai, X. Assembly-oriented task sequence planning for a dual-arm robot. IEEE. Robot. Autom. Lett. 2022, 7, 8455-62.

78. Bernardo, R.; Sousa, J. M.; Gonçalves, P. J. Planning robotic agent actions using semantic knowledge for a home environment. Intell. Robot. 2021, 1, 101-5.

79. Yano, T.; Ito, K. Goal-oriented task planning for composable robot control system using module-based control-as-inference framework. In: 2024 IEEE/SICE International Symposium on System Integration (SII); 2024 Jan 8-11; Ha Long, Vietnam. IEEE; 2024. pp. 1219-26.

80. Zeng, F.; Shirafuji, S.; Fan, C.; Nishio, M.; Ota, J. Stepwise large-scale multi-agent task planning using neighborhood search. IEEE. Robot. Autom. Lett. 2024, 9, 111-8.

81. Li, S.; Wei, M.; Li, S.; Yin, X. Temporal logic task planning for autonomous systems with active acquisition of information. IEEE. Trans. Intell. Veh. 2024, 9, 1436-49.

82. Noormohammadi-Asl, A.; Smith, S. L.; Dautenhahn, K. To lead or to follow? Adaptive robot task planning in human-robot collaboration. arXiv2024, arXiv:2401.01483. Available online: https://doi.org/10.48550/arXiv.2401.01483 (accessed 21 Jan 2025)

83. Berger, C.; Doherty, P.; Rudol, P.; Wzorek, M. Leveraging active queries in collaborative robotic mission planning. Intell. Robot. 2024, 4, 87-106.

84. Zhang, Y.; Tian, G.; Shao, X.; Zhang, M.; Liu, S. Semantic grounding for long-term autonomy of mobile robots toward dynamic object search in home environments. IEEE. Trans. Ind. Electron. 2023, 70, 1655-65.

85. Zhang, Y.; Tian, G.; Shao, X.; Cheng, J. Effective safety strategy for mobile robots based on laser-visual fusion in home environments. IEEE. Trans. Syst. Man. Cybern. Syst. 2022, 52, 4138-50.

86. Singh, I.; Blukis, V.; Mousavian, A.; et al. Progprompt: generating situated robot task plans using large language models. In: 2023 IEEE International Conference on Robotics and Automation (ICRA); 2023 May 29 - Jun 02; London, UK. IEEE; 2023. pp. 11523-30.

87. Wang, L.; Ma, C.; Feng, X.; et al. A survey on large language model based autonomous agents. Front. Comput. Sci. 2024, 18, 40231.

88. Ding, Y.; Zhang, X.; Amiri, S.; et al. Integrating action knowledge and LLMs for task planning and situation handling in open worlds. Auton. Robot. 2023, 47, 981-97.

89. Pallagani, V.; Muppasani, B. C.; Roy, K.; et al. On the prospects of incorporating large language models (LLMs) in automated planning and scheduling (APS). ICAPS. 2024, 34, 432-44.

90. Sarch, G.; Wu, Y.; Tarr, M. J.; Fragkiadaki, K. Open-ended instructable embodied agents with memory-augmented large language models. arXiv2024, arXiv:2310.15127. Available online: https://doi.org/10.48550/arXiv.2310.15127 (accessed 21 Jan 2025)

91. Lin, B. Y.; Huang, C.; Liu, Q.; Gu, W.; Sommerer, S.; Ren, X. On grounded planning for embodied tasks with language models. AAAI. 2023, 37, 13192-200.

92. Akiyama, S.; Dossa, R. F.; Arulkumaran, K.; Sujit, S.; Johns, E. Open-loop VLM robot planning: an investigation of fine-tuning and prompt engineering strategies. In: Proceedings of the First Workshop on Vision-Language Models for Navigation and Manipulation at ICRA 2024; Yokohama, Japan. 2024. pp. 1-6. Available from: https://openreview.net/forum?id=JXngwwPMR5. (accessed 2025-01-21)

93. Chalvatzaki, G.; Younes, A.; Nandha, D.; Le, A. T.; Ribeiro, L. F. R.; Gurevych, I. Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning. Front. Robot. AI. 2023, 10, 1221739.

94. Rana, K.; Haviland, J.; Garg, S.; Abou-Chakra, J.; Reid, I. D.; Suenderhauf, N. SAYPLAN: grounding large language models using 3D scene graphs for scalable robot task planning. arXiv2024, arXiv:2307.06135. Available online: https://doi.org/10.48550/arXiv.2307.06135 (accessed 21 Jan 2025)

95. Agia, C.; Jatavallabhula, K. M.; Khodeir, M.; et al. TASKOGRAPHY: evaluating robot task planning over large 3D scene graphs. arXiv2024, arXiv:2207.05006. Available online: https://doi.org/10.48550/arXiv.2207.05006 (accessed 21 Jan 2025)

96. Immorlica, N. Technical perspective: a graph-theoretic framework traces task planning. Commun. ACM. 2018, 61, 98.

97. Chen, T.; Chen, R.; Nie, L.; Luo, X.; Liu, X.; Lin, L. Neural task planning with AND–OR graph representations. IEEE. Trans. Multimed. 2019, 21, 1022-34.

98. Kortik, S.; Saranli, U. LinGraph: a graph-based automated planner for concurrent task planning based on linear logic. Appl. Intell. 2017, 47, 914-34.

99. Sellers, T.; Lei, T.; Luo, C.; Jan, G. E.; Ma, J. A node selection algorithm to graph-based multi-waypoint optimization navigation and mapping. Intell. Robot. 2022, 2, 333-54.

100. Odense, S.; Gupta, K.; Macready, W. G. Neural-guided runtime prediction of planners for improved motion and task planning with graph neural networks. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2022 Oct 23-27; Kyoto, Japan. IEEE; 2022. pp. 12471-8.

101. Kan, X.; Thayer, T. C.; Carpin, S.; Karydis, K. Task planning on stochastic aisle graphs for precision agriculture. IEEE. Robot. Autom. Lett. 2021, 6, 3287-94.

102. Mirakhor, K.; Ghosh, S.; Das, D.; Bhowmick, B. Task planning for object rearrangement in multi-room environments. AAAI. 2024, 38, 10350-7.

103. Saucedo, M. A. V.; Patel, A.; Saradagi, A.; Kanellakis, C.; Nikolakopoulos, G. Belief scene graphs: expanding partial scenes with objects through computation of expectation. In: 2024 IEEE International Conference on Robotics and Automation (ICRA); 2024 May 13-17; Yokohama, Japan. IEEE; 2024. pp. 9441-7.

104. Souza, C.; Velho, L. Deep reinforcement learning for task planning of virtual characters. Intell. Comput. 2021, 284, 694-711.

105. Liu, G.; de Winter, J.; Steckelmacher, D.; Hota, R. K.; Nowe, A.; Vanderborght, B. Synergistic task and motion planning with reinforcement learning-based non-prehensile actions. IEEE. Robot. Autom. Lett. 2023, 8, 2764-71.

106. Liu, G.; de Winter, J.; Durodié, Y.; Steckelmacher, D.; Nowe, A.; Vanderborght, B. Optimistic reinforcement learning-based skill insertions for task and motion planning. IEEE. Robot. Autom. Lett. 2024, 9, 5974-81.

107. Li, T.; Xie, F.; Qiu, Q.; Feng, Q. Multi-arm robot task planning for fruit harvesting using multi-agent reinforcement learning. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2023 Oct 1-5; Detroit, USA. IEEE; 2023. pp. 4176-83.

108. Wete, E.; Greenyer, J.; Kudenko, D.; Nejdl, W. Multi-robot motion and task planning in automotive production using controller-based safe reinforcement learning. In: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems; Auckland, New Zealand. 2024. pp. 1928-37. Available from: http://jgreen.de/wp-content/documents/2024/AAMAS_24_Multi-Robot-Motion-and-Task-Planning-in-Automotive-Production-Using-Controller-based-Safe-Reinforcement-Learning.pdf. (accessed 2025-01-21)

109. Liu, Y.; Palmieri, L.; Georgievski, I.; Aiello, M. Human-flow-aware long-term mobile robot task planning based on hierarchical reinforcement learning. IEEE. Robot. Autom. Lett. 2023, 8, 4068-75.

110. Li, X.; Yang, Y.; Wang, Q.; et al. A distributed multi-vehicle pursuit scheme: generative multi-adversarial reinforcement learning. Intell. Robot. 2023, 3, 436-52.

111. Li, D.; Hou, Q.; Zhao, M.; Wu, Z. Reliable task planning of networked devices as a multi-objective problem using NSGA-II and reinforcement learning. IEEE. Access. 2022, 10, 6684-95.

112. Zhang, J.; Ren, J.; Cui, Y.; Fu, D.; Cong, J. Multi-USV task planning method based on improved deep reinforcement learning. IEEE. Internet. Things. J. 2024, 11, 18549-67.
