A multimodal (gesture and voice) human-robot interaction system was developed that lets users teach a robot the task of sorting cubes by color. Seven users evaluated the system quantitatively and qualitatively. The quantitative tests covered a total of 63 verbal, 252 gestural, and 63 multimodal interactions, with recognition rates of 98.41% for voice commands, 81.35% for gestural commands, and 80.95% for multimodal commands. After learning, the robot performed the cube color-sorting task correctly 100% of the time, responding successfully to initial conditions (cube locations and counts) that had not been taught previously. The qualitative evaluation gauged users' perception of the system; its results were consistent with the recognition rates, favoring verbal over multimodal interaction.
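The reference list below names the off-the-shelf building blocks behind such a pipeline: PyAudio and SpeechRecognition for voice capture, OpenCV-Python and MediaPipe Hands for gesture capture, and pyttsx3 for spoken feedback. As a rough illustration of how a multimodal command loop can be wired together from those parts, the following is a minimal sketch assuming those libraries; the trigger word "ordena", the finger-count-to-color mapping, the "es-CO" language code, and the fusion rule are illustrative placeholders, not the authors' implementation (their actual code is in the GitHub repository cited below).

```python
# Minimal multimodal command sketch built on the libraries cited in the
# reference list (SpeechRecognition, OpenCV-Python, MediaPipe Hands, pyttsx3).
# The vocabulary, gesture rule, and fusion logic are illustrative assumptions.
import cv2
import mediapipe as mp
import pyttsx3
import speech_recognition as sr


def listen_for_command(timeout=5):
    """Capture one utterance from the microphone and transcribe it."""
    recognizer = sr.Recognizer()
    try:
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source, timeout=timeout)
        # Google's free web API, as exposed by the SpeechRecognition package.
        return recognizer.recognize_google(audio, language="es-CO").lower()
    except (sr.WaitTimeoutError, sr.UnknownValueError, sr.RequestError):
        return ""


def count_extended_fingers(max_frames=100):
    """Read webcam frames until a hand appears; return how many of the four
    non-thumb fingers are extended (fingertip above its middle joint)."""
    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                continue
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                lm = result.multi_hand_landmarks[0].landmark
                tips, pips = [8, 12, 16, 20], [6, 10, 14, 18]
                # Image y grows downward, so an extended fingertip has a
                # smaller y than its PIP joint.
                return sum(lm[t].y < lm[p].y for t, p in zip(tips, pips))
    finally:
        cap.release()
        hands.close()
    return None


def fuse(utterance, fingers):
    """Toy fusion rule: the voice names the action, the gesture the color."""
    colors = {1: "rojo", 2: "verde", 3: "azul"}  # assumed mapping
    if "ordena" in utterance and fingers in colors:
        return f"ordenar cubos de color {colors[fingers]}"
    return None


if __name__ == "__main__":
    command = fuse(listen_for_command(), count_extended_fingers())
    speaker = pyttsx3.init()
    # In the paper the fused command would drive the UR3; here we only echo it.
    speaker.say(command if command else "comando no reconocido")
    speaker.runAndWait()
```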

How to cite:
Nope Rodríguez SE, Mosquera-DeLaCruz JH, Martínez-Álvarez A, Loaiza-Correa H, Rodríguez-Téllez GA, Jamioy-Cabrera JD, Delgado-Giraldo MDLÁ, Penagos-Angrino JF. Sistema de interacción humano-robot para la enseñanza-aprendizaje de una tarea de ordenamiento de objetos mediante comunicación verbal y gestual. inycomp [Internet]. 2023 Nov 29 [cited 2024 Apr 28];25(Suppl):e-20613133. Available from: https://revistaingenieria.univalle.edu.co/index.php/ingenieria_y_competitividad/article/view/13133

Billard A, Ravichandar H, Polydoros AS, Chernova S. Recent Advances in Robot Learning from Demonstration. Annu Rev Control Robot Auton Syst. 2020;3(1):297–330. DOI: https://doi.org/10.1146/annurev-control-100819-063206

Drolshagen S, Pfingsthorn M, Gliesche P, Hein A. Acceptance of Industrial Collaborative Robots by People With Disabilities in Sheltered Workshops. Front Robot AI. 2021;7:541741. DOI: https://doi.org/10.3389/frobt.2020.541741

Haage M, Piperagkas G, Papadopoulos C, Mariolis I, Malec J, Bekiroglu Y, et al. Teaching Assembly by Demonstration Using Advanced Human Robot Interaction and a Knowledge Integration Framework. Procedia Manuf. 2017;11:164–73. DOI: https://doi.org/10.1016/j.promfg.2017.07.221

So W, Wong MK, Lam CK, Lam W, Chui AT, Lee T, et al. Using a social robot to teach gestural recognition and production in children with autism spectrum disorders. Disabil Rehabil Assist Technol. 2017. DOI: https://doi.org/10.1080/17483107.2017.1344886

Lázaro-Gredilla M, Lin D, Swaroop Guntupalli J, George D. Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs. Sci Robot. 2019;4(26):1–16. DOI: https://doi.org/10.1126/scirobotics.aav3150

Mukherjee D, Gupta K, Chang LH, Najjaran H. A Survey of Robot Learning Strategies for Human-Robot Collaboration in Industrial Settings. Robot Comput Integr Manuf. 2022;73:102231. DOI: https://doi.org/10.1016/j.rcim.2021.102231

Li S, Zheng P, Fan J, Wang L. Toward Proactive Human-Robot Collaborative Assembly: A Multimodal Transfer-Learning-Enabled Action Prediction Approach. IEEE Trans Ind Electron. 2022;69(8):8579–88. DOI: https://doi.org/10.1109/TIE.2021.3105977

Mosquera-DeLaCruz JH, Nope-Rodríguez SE, Restrepo-Girón AD, Martínez-Álvarez A, Loaiza-Correa H. Human-computer multimodal interface to internet navigation. Disabil Rehabil Assist Technol. 2020;1–14. DOI: https://doi.org/10.1080/17483107.2020.179944

Kotseruba I, Tsotsos JK. 40 years of cognitive architectures: core cognitive abilities and practical applications. Artif Intell Rev. 2018;53:17–94. DOI: https://doi.org/10.1007/s10462-018-9646-y

Das N, Prakash R, Behera L. Learning object manipulation from demonstration through vision for the 7-DOF Barrett WAM. In: 2016 IEEE 1st International Conference on Control, Measurement and Instrumentation (CMI). 2016. p. 391–6. DOI: https://doi.org/10.1109/CMI.2016.7413777

Du G, Chen M, Liu C, Zhang B, Zhang P. Online robot teaching with natural human-robot interaction. IEEE Trans Ind Electron. 2018;65(12):9571–81. DOI: https://doi.org/10.1109/TIE.2018.2823667

Argall BD, Chernova S, Veloso M, Browning B. A survey of robot learning from demonstration. Rob Auton Syst. 2009;57(5):469–83. DOI: https://doi.org/10.1016/j.robot.2008.10.024

Hausman K, Chebotar Y, Schaal S, Sukhatme G, Lim JJ. Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets. Adv Neural Inf Process Syst. 2017;30:1236–46.

Gonzalez-Fierro M, Balaguer C, Swann N, Nanayakkara T. A humanoid robot standing up through learning from demonstration using a multimodal reward function. In: 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids). IEEE; 2013. p. 74–9. DOI: https://doi.org/10.1109/HUMANOIDS.2013.7029958

Mayer RE. Thirty years of research on online learning. Appl Cogn Psychol. 2019;33(2):152–9. DOI: https://doi.org/10.1002/acp.3482

Laird JE, Lebiere C, Rosenbloom PS. A standard model of the mind: Toward a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics. AI Mag. 2017;38(4):13–26. DOI: https://doi.org/10.1609/aimag.v38i4.2744

Choi D, Langley P. Evolution of the ICARUS Cognitive Architecture. Cogn Syst Res. 2018;48:25–38. DOI: https://doi.org/10.1016/j.cogsys.2017.05.005

Laird JE. The Soar cognitive architecture. In: Proceedings of the 2013 International Conference on Current Trends in Information Technology (CTIT). 2013. p. 135–42.

Abbasi B, Monaikul N, Rysbek Z, Di Eugenio B. A Multimodal Human-Robot Interaction Manager for Assistive Robots. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2019. p. 6756–62. DOI: https://doi.org/10.1109/IROS40897.2019.8968505

Chen L, Javaid M, Di Eugenio B. The roles and recognition of Haptic-Ostensive actions in collaborative multimodal human-human dialogues. Comput Speech Lang. 2015;34:201–31. DOI: https://doi.org/10.1016/j.csl.2015.03.010

Monaikul N, Abbasi B, Rysbek Z, Di Eugenio B. Role Switching in Task-Oriented Multimodal Human-Robot Collaboration. In: 2020 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 2020. p. 1150–6. DOI: https://doi.org/10.1109/RO-MAN47096.2020.9223461

Male J, Martinez-Hernandez U. Collaborative architecture for human-robot assembly tasks using multimodal sensors. In: 2021 International Conference on Advanced Robotics (ICAR). 2021. p. 1024–9. DOI: https://doi.org/10.1109/ICAR53236.2021.9659382

Billard AG, Calinon S, Dillmann R. Learning from Humans. In: Springer Handbook of Robotics. 2016. p. 1995–2014. DOI: https://doi.org/10.1007/978-3-319-32552-1_74

Pypi.org. PyAudio 0.2.13 [Internet]. 2022 [cited 2023 Jan 18]. Available from: https://pypi.org/project/PyAudio/

Pypi.org. Python Speech Recognition 3.9.0 [Internet]. 2022 [cited 2023 Jan 18]. Available from: https://pypi.org/project/SpeechRecognition/

Google LLC. Language model selection for speech-to-text conversion [Internet]. 2023 [cited 2023 Mar 29]. Available from: https://patents.google.com/patent/US9495127B2/en

Pypi.org. OpenCV-Python 4.7.0.68 [Internet]. 2022 [cited 2023 Jan 18]. Available from: https://pypi.org/project/opencv-python/

Google LLC. Mediapipe Hands [Internet]. 2022 [cited 2023 Jan 18]. Available from: https://google.github.io/mediapipe/solutions/hands

Majumder N, Hazarika D, Gelbukh A, Cambria E, Poria S. Multimodal sentiment analysis using hierarchical fusion with context modeling. Knowledge-Based Syst. 2018;161:124–33. DOI: https://doi.org/10.1016/j.knosys.2018.07.041

Pypi.org. Pyttsx3 2.90 [Internet]. 2022 [cited 2023 Jan 18]. Available from: https://pypi.org/project/pyttsx3/

Blandon JS. Interfaz de voz humano-robot para controlar un brazo robótico UR3 [Undergraduate thesis in Electronic Engineering]. Pontificia Universidad Javeriana Cali; 2021.

Holguin JD. Algoritmo de fusión de señales de audio y vídeo para el manejo de un UR3 [Undergraduate thesis in Electronic Engineering]. Pontificia Universidad Javeriana Cali; 2021.

Mosquera-DeLaCruz J-H, Martínez-Álvarez A, Nope-Rodríguez S-E, Loaiza-Correa H, Rodríguez-Téllez G-A, Jamioy-Cabrera J-D, et al. UR3 Multimodal Interaction Color Classification [Internet]. 2023 [cited 2023 Aug 10]. Available from: https://github.com/nandostiwar/UR3_Multimodal_Interaction_Color_Classification

Simply Psychology. Likert Scale [Internet]. 2023 [cited 2023 Aug 10]. Available from: https://www.simplypsychology.org/likert-scale.html

Received 2023-08-14
Accepted 2023-08-17
Published 2023-11-29