Article

Design and Implementation of Adam: A Humanoid Robotic Head with Social Interaction Capabilities

College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2024, 7(3), 42; https://doi.org/10.3390/asi7030042
Submission received: 3 April 2024 / Revised: 15 May 2024 / Accepted: 16 May 2024 / Published: 27 May 2024
(This article belongs to the Section Human-Computer Interaction)

Abstract

Social robots are being conceived with different characteristics and used in different applications. The growth of social robotics benefits from advances in fabrication, sensing, and actuation technologies, as well as in signal processing and artificial intelligence. This paper presents the design and implementation of the humanoid robotic platform Adam, consisting of a motorized human-like head with precise movements of the eyes, jaw, and neck, together with face tracking and vocal conversation capabilities based on ChatGPT. Adam relies on 3D-printed parts together with a microphone, a camera, and suitable servomotors, and it has high structural integrity and flexibility. Adam’s control framework consists of a signal exploitation and motor command strategy that allows efficient social interactions. Adam is an innovative platform that combines manufacturability, user-friendliness, low cost, acceptability, and sustainability, offering advantages compared with other platforms. Indeed, the platform’s hardware and software components are adjustable, allowing its abilities to be extended and adapted to different applications in a variety of roles. Future work will entail the development of a body for Adam and the addition of skin-like materials to enhance its human-like appearance.

1. Introduction

Robotic platforms used in social interactions have come in different shapes, sizes, and capabilities. In a large part of the conducted research, robots have been humanoid [1,2,3,4,5], but other works have used animal- or toy-like robots [6,7,8] or robots not meant to resemble shapes familiar to humans [9,10]. A wide array of applications has emerged [11,12], including telepresence [13,14,15], education [16,17,18,19], care and assistance [20,21], reception [22], and children’s companionship [23,24]. The use of robots in these social contexts has benefited enormously from advancements across several research domains, such as computer vision [25,26], natural language processing and conversational systems [21,27], and expression and gesture recognition and generation [28,29].
From a hardware point of view, additive manufacturing has facilitated the design, implementation, and modification of robotic platforms due to the flexibility and customizability it enables [30]. In addition, advances in actuator manufacturing have enabled social robots to perform precise actions, such as displaying biologically inspired motions, using motors of different types and muscle-like actuators [31,32].
In this paper, the design and implementation of the Adam platform are detailed. The platform consists of a humanoid robotic head that can perform human-like head motions and display expressions thanks to its seven degrees of freedom. It is also equipped with sound acquisition, processing, and generation capabilities, used in conjunction with ChatGPT to engage in conversations. This 3D-printed platform has been designed for social interaction contexts, such as education and visitor reception. It is easily reproducible and modifiable, allowing for design improvements or the addition of capabilities. The design is planned for initial mounting on a human-like upper body, with the aim of subsequently adding other body parts to develop a multipurpose modular humanoid robot.
This paper is organized as follows. Section 2 reviews recent work in the domain of human–humanoid robot interaction. Section 3 details the mechanical aspects of the robot design and implementation. Section 4 presents the electronics and interaction control of the robot. Section 5 concludes the paper and outlines future work.

2. Related Work

Humanoid robot designs have previously been proposed with different aims. While some were general-purpose open-source designs made for 3D printing, others were made for specific purposes and their designs were not made publicly available for reproduction. Table 1 shows the characteristics of some of the robots mentioned below.
Among the open-source designs, “InMoov” is a 3D-printed robot that is easily replicable [33]. It is a life-size robot with a humanoid design and a large number of degrees of freedom, allowing it to have human-like motions. It has been used, for example, in research on reception and direction-giving [34], spatial perception and object identification [39], and real-time human motion imitation [40]. Another humanoid robot, “imNEU”, was based on InMoov and had features like a differential drive mobile platform [41].
“Roboquin” was presented in [36] as a humanoid robot with human-like body proportions and degrees of freedom, allowing it to emulate human movements. This robot was capable of nonverbal communication through gestures. It had no facial components; it expressed states like sadness, anger, and fear by controlling the direction of the head and the posture of the arms and hands. Facial capabilities were the focus of [37], where the “Berrick” robotic head platform was presented; it was based on the InMoov head and equipped to perform face detection and gazing tasks.
Other robotic head platforms worth mentioning include Alan and Alena [38], which are highly customizable and designed to exploit sound- and image-processing technologies. Another notable platform is Furhat [42,43], equipped with a 3D display that allows it to show facial movements and expressions; it can both receive and emit sound, enabling it to engage in conversations. Kismet is a robot head with a cartoonish face that can engage in social interactions [44]. Fritz, presented in [45], is a humanoid robot with degrees of freedom in the face and arms that allow it to interact with people using modalities like speech, facial expressions, and gestures. Additionally, “Han” is a humanoid robot capable of recognizing and interacting with people using cameras and voice recognition technology. It can perform complex facial expressions and has about 40 motors controlling its artificial facial muscles; the face is covered in a soft, flesh-like rubber, allowing it to move in a human-like way [46,47,48].
From the above, it can be seen that increasing the number of degrees of freedom and equipping robotic platforms with verbal communication abilities based on strong language models can enhance their interactions with humans and, thus, their acceptability. Adam’s design addresses these considerations in both hardware and software. Manufactured with 3D printing, built from commercial control and actuation devices, and centered on ChatGPT on the software side, the implementation presented in this paper combines human-likeness with a high degree of robustness, flexibility, and user-friendliness.

3. Mechanical Design and Implementation

The humanoid robotic platform Adam is designed to mimic human-like head motion and has seven degrees of freedom. The outer head consists of only three 3D-printed shells: the jaw, the frontal face, and the back head. These shells are obtained by splitting a complete humanoid head model without any simplification. This low number of shells improves manufacturability and ease of assembly while ensuring the required degrees of freedom and allowing the platform to perform human-like movements; this is important for the acceptability of the platform when interacting with human users. The robot head is equipped with a camera for face tracking and uses a total of nine servomotors for eye, jaw, and neck movements. The hardware setup involves constructing the robot head structure, connecting the servomotors to a controller board, and interfacing the controller board with a Raspberry Pi. The software setup spans both the controller board and the Raspberry Pi and includes integrating speech recognition and synthesis modules, establishing a connection to ChatGPT, implementing face tracking algorithms, and controlling the servomotors [49].
Figure 1 and Figure 2 show Adam’s structural design, with its different layers, as well as its dimensions. The internal support structure consists of two identical and interconnected 3D-printed plates and all other parts are connected to this structure. The different hardware and software aspects of this platform will be presented in the following parts.

3.1. Mechanism

Adam incorporates mechanisms that enable precise and lifelike movements, driven by a total of nine servomotors, as detailed below. These servomotors are precisely controlled by a controller board in coordination with the Raspberry Pi. The mechanisms also rely on 3D-printed parts designed to ensure optimal fit and functionality; these parts provide the structural integrity and flexibility required for the animatronic movements of the robot head. To create lifelike movements, Adam has a total of seven degrees of freedom. Each eye possesses two degrees of freedom, allowing it to pan and tilt, providing a wide range of gaze directions and conveying a sense of focus and attention. The jaw, with its single degree of freedom, can articulate in a natural opening and closing motion, enabling the head to simulate speech through subtle mouth movements. The neck, with its two degrees of freedom, allows for fluid rotation and tilting, granting the head the ability to turn and nod realistically and enhancing its interaction with the environment. Through the combination of these precise and coordinated movements, along with the capacity to engage in vocal conversations, the animatronic robot head Adam achieves a striking level of realism, captivating audiences with its interactivity and engagement, its lifelike expressions, and its seamless integration of motion.
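To make the relation between the nine servomotors and the seven degrees of freedom explicit, the following sketch tallies the allocation described above; it is an illustrative summary in Python, not the authors’ control code, and the joint names are assumptions.

```python
# Illustrative servo-to-joint allocation for Adam, compiled from the text:
# two servos per eye (pan and tilt), two servos sharing the jaw's single
# degree of freedom, and three servos for the neck (two share the pitch).
SERVO_MAP = {
    "left_eye":  {"servos": 2, "dof": ["pan", "tilt"]},
    "right_eye": {"servos": 2, "dof": ["pan", "tilt"]},
    "jaw":       {"servos": 2, "dof": ["open_close"]},
    "neck":      {"servos": 3, "dof": ["yaw", "pitch"]},
}

total_servos = sum(j["servos"] for j in SERVO_MAP.values())  # 9 servomotors
total_dof = sum(len(j["dof"]) for j in SERVO_MAP.values())   # 7 degrees of freedom
print(f"{total_servos} servos driving {total_dof} degrees of freedom")
```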

3.2. Eye Movements

As shown in Figure 3, Adam’s eye movements are controlled by four MG90S (available: https://www.towerpro.com.tw/product/mg90s-3/, accessed on 10 March 2024) servo motors, with two dedicated to each eye. One servo motor controls the vertical movement of the eye, allowing it to be oriented up and down, while the other servo motor controls the horizontal movement, enabling the eye to turn left and right. These two degrees of freedom for each eye are independently controlled and provide them with the ability to simulate natural eye movement. This intricate mechanism allows the robot head to establish eye contact, convey emotions through subtle eye gestures, and focus its attention on individuals. The dynamic and realistic movements of the eyes contribute significantly to the humanoid robotic head Adam’s ability to engage and captivate its audience, making it an impressive and lifelike creation. The eye mechanisms were designed using 3D modeling software, and the corresponding parts were 3D-printed to ensure precise alignment and smooth movements. The 3D-printed components provide the necessary range of motion and eye stability to accurately track and engage with the user.
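As a concrete illustration of the independent pan/tilt control of one eye, the minimal sketch below assumes direct PWM control from the Raspberry Pi via the gpiozero library; on the actual platform, the MG90S servos are driven through the EZ-B controller board, and the pin numbers and angle limits here are assumptions.

```python
from gpiozero import AngularServo

PAN_RANGE, TILT_RANGE = 30, 20  # assumed mechanical limits, in degrees

# Hypothetical GPIO pins; the real platform routes these commands
# through the EZ-B controller rather than direct GPIO PWM.
left_pan = AngularServo(17, min_angle=-PAN_RANGE, max_angle=PAN_RANGE)
left_tilt = AngularServo(18, min_angle=-TILT_RANGE, max_angle=TILT_RANGE)

def gaze(pan_deg: float, tilt_deg: float) -> None:
    """Orient the eye, clamping commands to the mechanical limits."""
    left_pan.angle = max(-PAN_RANGE, min(PAN_RANGE, pan_deg))
    left_tilt.angle = max(-TILT_RANGE, min(TILT_RANGE, tilt_deg))

gaze(10, -5)  # look slightly to one side and down
```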

3.3. Jaw Movements

Adam’s jaw, as shown in Figure 4, is equipped with two HK15298 (available: https://hobbyking.com/en_us/hobbykingtm-hk15298-high-voltage-coreless-digital-servo-mg-bb-15kg-0-11sec-66g.html, accessed on 10 March 2024) servomotors responsible for its up and down movement in one degree of freedom. These servomotors control the opening and closing of the jaw, allowing the robot head to simulate speech and produce realistic movements synchronized with the audio output when delivering oral responses during interactions with humans. The jaw mechanism relies on 3D-printed components, ensuring its flexibility and durability. When answering questions, the articulating jaw brings an added layer of realism to the animatronic robot head, making it engaging and relatable to observers.
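The up-and-down jaw motion during speech can be sketched as a simple background loop that toggles the jaw while the audio plays and closes it when the utterance ends. This is a minimal illustration only, with set_jaw() a hypothetical helper standing in for the commands sent through the controller board.

```python
import threading
import time

def set_jaw(opening: float) -> None:
    """Hypothetical helper driving the two jaw servos (0 = closed, 1 = open)."""
    print(f"jaw -> {opening:.1f}")  # placeholder for a controller-board command

def flap_jaw(stop_event: threading.Event, period: float = 0.25) -> None:
    """Alternate the jaw between open and closed while speech is playing."""
    is_open = False
    while not stop_event.is_set():
        is_open = not is_open
        set_jaw(1.0 if is_open else 0.0)
        time.sleep(period / 2)
    set_jaw(0.0)  # return to the closed rest position

stop = threading.Event()
threading.Thread(target=flap_jaw, args=(stop,)).start()
time.sleep(2.0)  # stand-in for the duration of the synthesized utterance
stop.set()       # jaw stops and closes once the utterance is complete
```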

3.4. Neck Movements

Adam’s neck comprises three HS-805BB (available: https://hitecrcd.com/products/servos/analog/giant-analog/hs-805bb/product, accessed on 10 March 2024) servomotors that enable two degrees of freedom, as shown in Figure 5. One servomotor controls the rotation of the neck in the horizontal plane, allowing it to turn right and left. The other two servomotors are responsible for the vertical movement of the neck, enabling it to tilt up and down. The neck mechanism relies on 3D-printed components that provide the strength and precision needed for smooth, realistic movements mimicking those of a human neck. The neck is designed to have a pitch interval of 60° and a yaw interval of 100°, as depicted in Figure 6.

3.5. Kinematics and Stress Analysis

As shown above, three independent mechanisms control the head movement: the eye mechanism, the jaw mechanism, and the neck mechanism, all of which are adaptations of a four-bar linkage. The head weighs 1.05 kg and is supported solely by the neck, making the neck the most crucial structural component. This section provides the kinematic and structural analysis of the neck mechanism.
The pitching motion is driven by two synchronized servomotors actuating a set of parallel four-bar linkages with identical input but moving in opposite directions.
Figure 7 shows a simplified kinematic diagram of the linkages; O4B is the crank, which is actuated by the servomotor, AB is the coupler link, and O2A is the output link, which is part of the internal frame of the head. The rotation of the output link produces the pitching motion of the head. The folded-in configuration of the crank–coupler is selected as it delivers a good overall transmission angle and keeps the neck compact. A kinematic motion study is performed using the SolidWorks motion analysis package by simulating the crank rotation for one complete cycle of operation. The transmission angle, the angle between the coupler and the rocker, is an important parameter indicating the efficiency of force transfer between the links. The motion analysis results show that the transmission angle varies from 47° to 95°, as shown in Figure 8, ensuring an efficient and smooth motion.
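For reference, the transmission angle of a four-bar linkage follows from two applications of the law of cosines. The link lengths are not reported in the paper, so the relation below is the standard textbook form rather than Adam-specific values: with ground link r1 (O2O4), crank r2 (O4B), coupler r3 (AB), rocker r4 (O2A), crank angle θ2, and z the diagonal joining the crank pin B to the rocker pivot O2,

```latex
z^2 = r_1^2 + r_2^2 - 2\,r_1 r_2 \cos\theta_2,
\qquad
\gamma = \arccos\!\left(\frac{r_3^2 + r_4^2 - z^2}{2\,r_3 r_4}\right).
```

Evaluating γ over a full crank cycle in this way reproduces the kind of sweep reported in Figure 8.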
The head’s center of mass is offset from the neck pivot by a radial distance of 70 mm, creating unbalanced forces of varying magnitudes depending on the angle of the head. This off-balance mass also results in inertial forces, especially during sudden changes of direction, as in a nodding motion. A dynamic force analysis is, hence, suitable for capturing the internal forces in the system. The nodding motion is simulated in SolidWorks by applying two virtual motors in place of the servomotors, with a pitching angle of 60° to match the range of motion of the actual robot. These motors are then set to oscillate at 1 Hz around the horizontal plane to simulate the nodding motion. The results show a peak motor torque of 325 N·mm (see Figure 9), occurring when the head changes direction from a downward to an upward motion. At this position, the center of gravity of the head is furthest from the neck pivot, which creates a high unbalanced force. The structural analysis presented in the following section is performed in this critical position with the highest internal forces. The two notches observed in the chart around 0.2 s and 0.8 s correspond to the position where the head is perfectly upright and the center of mass is vertically above the neck pivot; at this position, the head is perfectly balanced and very little motor torque is required to support it.
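A back-of-envelope static check, using only the head mass and center-of-mass offset given above and ignoring inertia and the exact linkage geometry, bounds the gravitational torque about the neck pivot by

```latex
\tau_{\mathrm{grav}} \le m\,g\,r
  = 1.05\ \mathrm{kg} \times 9.81\ \mathrm{m/s^2} \times 0.070\ \mathrm{m}
  \approx 0.72\ \mathrm{N\cdot m},
```

i.e., roughly 360 N·mm per motor if the load is shared equally by the two pitch servos, which is of the same order as the simulated 325 N·mm peak; the simulation additionally resolves the actual head angle and the inertial contributions.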
Stress analysis performed on the important load-bearing members of the neck is depicted in Figure 10. The dynamic forces applied to each of the members are obtained directly from the SolidWorks motion study. These are applied as remote loads in the simulation and are shown in the figure using pink lines; the region of maximum stress is highlighted and the corresponding values are displayed.
All the analyzed critical components show stresses below the yield strength and the corresponding values are recorded in Table 2.

4. Interaction and Control

In terms of electronics, Adam integrates various components to facilitate communication, control, and power distribution. Control is performed with a Raspberry Pi 4, which interfaces with an EZ-B robot controller board (available: https://www.ez-robot.com/store/p24/EZB-smart-robot-controller.html, accessed on 10 March 2024), an EZ-B robotics camera (available: https://www.ez-robot.com/store/p64/robotics-camera.html, accessed on 10 March 2024), a Turtle Beach USB microphone (available: https://uk.turtlebeach.com/pages/stream-mic, accessed on 10 March 2024), and a loudspeaker. These components work in concert to provide the robot’s interaction capabilities. The controller board drives the actuators in conjunction with the Raspberry Pi. The camera module captures real-time video input for face tracking and recognition. The USB microphone captures user speech, which is converted to text for prompting ChatGPT. Adam’s image-processing component uses the EZ-B robotics camera module and employs algorithms for detecting and tracking faces through the controller board.

4.1. Visual Tracking

The algorithm used for the visual tracking consists of a face detection module that provides the face location in the image. This location is then used to control the servomotors in order to keep the face in the center of the image. For instance, if the person is detected to be in the upper left side of the picture, the neck servomotors are controlled to rotate the head up and to the left. A visual scene captured by Adam with face detection is illustrated in Figure 11.
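A minimal version of such a tracking loop is sketched below as a proportional visual-servoing scheme. The actual platform performs detection and tracking through the EZ-B camera and controller, so the OpenCV Haar-cascade detector, the gain value, the sign conventions, and the set_neck() helper are all assumptions for illustration.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

K_P = 0.05              # assumed gain: degrees of correction per pixel of error
yaw, pitch = 0.0, 0.0   # current neck pose, in degrees

def set_neck(yaw_deg: float, pitch_deg: float) -> None:
    """Hypothetical helper commanding the neck servos via the controller board."""
    print(f"neck -> yaw {yaw_deg:.1f}, pitch {pitch_deg:.1f}")

def track(frame) -> None:
    """Nudge the head so the first detected face drifts toward the image center."""
    global yaw, pitch
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return
    x, y, w, h = faces[0]
    err_x = (x + w / 2) - frame.shape[1] / 2  # >0 when the face is right of center
    err_y = (y + h / 2) - frame.shape[0] / 2  # >0 when the face is below center
    # Signs depend on camera mounting; clamp to the yaw/pitch ranges of Section 3.4.
    yaw = max(-50.0, min(50.0, yaw - K_P * err_x))
    pitch = max(-30.0, min(30.0, pitch - K_P * err_y))
    set_neck(yaw, pitch)
```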

4.2. Conversational System

Adam’s conversational ability is centered on ChatGPT and is achieved in several stages, as illustrated in Figure 12 and sketched in code after the following list:
  • Sound acquisition: the utterance of the user interacting with Adam is recorded through a microphone connected to the Raspberry Pi controlling it.
  • Speech recognition: the recorded speech signal is processed by a Python speech recognition library (available: https://pypi.org/project/SpeechRecognition/, accessed on 10 March 2024) that returns the sequence of uttered words.
  • Prompt and response: the sequence of uttered words is sent as a prompt to ChatGPT through the OpenAI Python library (available: https://platform.openai.com/docs/libraries, accessed on 10 March 2024), which returns the response as a sequence of words. The prompt also contains the previous parts of the conversation, so that they are taken into account in the returned answer. The answer is also kept short so that the conversation remains lively on both sides.
  • Speech synthesis and playing: the sequence of words returned by ChatGPT is synthesized into a sound signal with a Python text-to-speech library (available: https://pypi.org/project/pyttsx3/, accessed on 10 March 2024) and played through a loudspeaker connected to the Raspberry Pi. In conjunction with the loudspeaker sound emission, the jaw is activated to move up and down repeatedly; it returns to its initial position and stops when the speech utterance is completed.
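A minimal sketch of this four-stage loop is given below, using the SpeechRecognition, OpenAI, and pyttsx3 Python libraries named above. The model identifier, the system prompt, and the choice of Google’s recognizer backend are assumptions rather than the authors’ exact configuration, and the jaw actuation that runs alongside playback is omitted here.

```python
import speech_recognition as sr
import pyttsx3
from openai import OpenAI

recognizer = sr.Recognizer()
client = OpenAI()    # reads OPENAI_API_KEY from the environment
tts = pyttsx3.init()
history = [{"role": "system",
            "content": "You are Adam, a friendly robotic head. Keep answers brief."}]

def converse_once() -> None:
    with sr.Microphone() as source:                  # 1. sound acquisition
        audio = recognizer.listen(source)
    text = recognizer.recognize_google(audio)        # 2. speech recognition
    history.append({"role": "user", "content": text})
    reply = client.chat.completions.create(          # 3. prompt and response,
        model="gpt-3.5-turbo", messages=history)     #    with conversation history
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    tts.say(answer)                                  # 4. speech synthesis and playing
    tts.runAndWait()
```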
Compared with existing sound-based conversational systems (see [50,51], for example), Adam features a human-like animated embodiment that enriches the interaction with humans and improves its acceptability. Additionally, the current software structure of the conversational system makes it easily configurable and modifiable and allows it to benefit from emerging libraries and toolboxes that may improve its performance.

Conversational System Evaluation

Adam’s conversational abilities were tested in different environments with ten people of both genders and of different age groups asking Adam questions. In each case, the interacting person was in front of Adam, looking at it, while Adam’s head was oriented toward the person. Although not strictly necessary, a screen displayed signaling information, such as the times when the robot was acquiring sound from the user. As stated above, Adam’s conversational interaction relies on libraries requiring an internet connection, especially the OpenAI Python library. The speed of the connection and the responsiveness of the related servers can affect the time the robot takes to answer a user’s utterance. However, the robot’s answers were fast enough not to affect the interactions negatively, and the participants did not report any issues related to this aspect. The testing involved asking each person five different questions, with the robot accurately answering four or five of these per person. This demonstrates the efficiency of the speech recognition, ChatGPT connection, and speech synthesis modules, as well as their effective operation as a cohesive unit. The results of this test are reported in Figure 13.

5. Conclusions and Future Work

While a detailed statistical study on its acceptability and usefulness in specific roles is planned for future research, this paper presents the successful development and implementation of the humanoid robotic head, Adam.
Adam incorporates a mechanism with servo motors and a servo controller board to enable precise and lifelike movements of the eyes, jaw, and neck. The system includes a camera for face tracking and integrates with the ChatGPT server to generate responses. The performance of the system has been evaluated through user testing sessions, including the accuracy of responses and the ability to maintain eye contact with users. Adam has also been showcased at several events and was positively received by most people who interacted with it.
Adam can be used as an interactive research platform in human–robot interaction for areas like education and visitor reception. It incorporates 3D modeling and additive manufacturing, along with actuation and computing technologies, into a configurable humanoid embodiment. This ensures its structural integrity and flexibility, with smooth and realistic movements. Compared with existing robotic platforms (characteristics summarized in Table 1), Adam features a relatively high number of degrees of freedom. It has both vision and sound acquisition capabilities along with sound emission. Additionally, it boasts realistic expressiveness and a lifelike design that is easy to manufacture and assemble, with a production cost not exceeding USD 650. Beyond its ability to track persons visually and engage in conversations, Adam’s programmability and multiple degrees of freedom extend its potential for more tasks and features.
Moving forward, future work involves expanding Adam’s capabilities. The next phase will focus on developing a body to accompany the robot head, enabling a more complete humanoid appearance. Plans also include incorporating skin-like materials to enhance the face, making it more human-like in appearance and texture; this will improve Adam’s facial movements and positively impact its communication with humans. Additionally, Adam’s interaction capabilities can be extended to handle more than one person at once. In the presence of several persons, the robot needs to determine its current interlocutor at each time. This can be done by detecting the speaking person through lip motion or by estimating the direction of sound emission as a first step, followed by orienting the head toward that person. To this end, more signal processing hardware and software can be added, and the Adam platform offers the flexibility to allow this.

Author Contributions

Conceptualization, S.S., K.Y., S.A. and B.P.; methodology, K.Y., S.S., S.A. and T.B.; software, K.Y. and S.S.; validation, S.S., S.A. and G.A.; formal analysis, K.Y.; investigation, B.P. and S.S.; resources, K.Y., S.S., S.A. and T.B.; data curation, B.P. and S.S.; writing—original draft preparation, G.A., K.Y. and S.S.; writing—review and editing, K.Y., S.S., S.A. and T.B.; visualization, K.Y. and B.P.; supervision, S.A. and T.B.; project administration, S.S. and T.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. NAO the Humanoid and Programmable Robot|Aldebaran. Available online: https://www.aldebaran.com/en/nao (accessed on 4 July 2023).
  2. Pepper the Humanoid and Programmable Robot|Aldebaran. Available online: https://www.aldebaran.com/en/pepper (accessed on 4 July 2023).
  3. ASIMO by Honda|The World’s Most Advanced Humanoid Robot. Available online: https://asimo.honda.com/ (accessed on 4 July 2023).
  4. Wood, L.J.; Zaraki, A.; Robins, B.; Dautenhahn, K. Developing Kaspar: A Humanoid Robot for Children with Autism. Int. J. Soc. Robot. 2019, 13, 491–508. [Google Scholar] [CrossRef] [PubMed]
  5. Karar, A.; Said, S.; Beyrouthy, T. Pepper humanoid robot as a service robot: A customer approach. In Proceedings of the 2019 3rd International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France, 24–26 April 2019; pp. 1–4. [Google Scholar]
  6. PARO Therapeutic Robot. Available online: http://www.parorobots.com/index.asp (accessed on 4 July 2023).
  7. Overview < Huggable: A Social Robot for Pediatric Care—MIT Media Lab. Available online: https://www.media.mit.edu/projects/huggable-a-social-robot-for-pediatric-care/overview/ (accessed on 4 July 2023).
  8. aibo. Available online: https://us.aibo.com/ (accessed on 10 October 2022).
  9. Luperto, M.; Monroy, J.; Renoux, J.; Lunardini, F.; Basilico, N.; Bulgheroni, M.; Cangelosi, A.; Cesari, M.; Cid, M.; Ianes, A.; et al. Integrating Social Assistive Robots, IoT, Virtual Communities and Smart Objects to Assist at-Home Independently Living Elders: The MoveCare Project. Int. J. Soc. Robot. 2021, 15, 517–545. [Google Scholar] [CrossRef]
  10. Double Robotics—Telepresence Robot for the Hybrid Office. Available online: https://www.doublerobotics.com/ (accessed on 4 July 2023).
  11. Youssef, K.; Said, S.; Alkork, S.; Beyrouthy, T. A Survey on Recent Advances in Social Robotics. Robotics 2022, 11, 75. [Google Scholar] [CrossRef]
  12. Mahdi, H.; Akgun, S.A.; Saleh, S.; Dautenhahn, K. A survey on the design and evolution of social robots—Past, present and future. Robot. Auton. Syst. 2022, 156, 104193. [Google Scholar] [CrossRef]
  13. Shiarlis, K.; Messias, J.; van Someren, M.; Whiteson, S.; Kim, J.; Vroon, J.; Englebienne, G.; Truong, K.; Evers, V.; Pérez-Higueras, N.; et al. TERESA: A Socially Intelligent SEmi-autonomous Telepresence System. In Proceedings of the International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015. [Google Scholar]
  14. Tuli, T.B.; Terefe, T.O.; Ur Rashid, M.M. Telepresence Mobile Robots Design and Control for Social Interaction. Int. J. Soc. Robot. 2020, 13, 877–886. [Google Scholar] [CrossRef] [PubMed]
  15. Youssef, K.; Said, S.; Al Kork, S.; Beyrouthy, T. Telepresence in the Recent Literature with a Focus on Robotic Platforms, Applications and Challenges. Robotics 2023, 12, 111. [Google Scholar] [CrossRef]
  16. Mubin, O.; Alhashmi, M.; Baroud, R.; Alnajjar, F.S. Humanoid Robots as Teaching Assistants in an Arab School. In Proceedings of the 31st Australian Conference on Human-Computer Interaction, Fremantle, WA, Australia, 2–5 December 2019. [Google Scholar]
  17. Mispa, T.A.; Sojib, N. Educational Robot Kiddo Learns to Draw to Enhance Interactive Handwriting Scenario for Primary School Children. In Proceedings of the 3rd International Conference of Intelligent Robotic and Control Engineering (IRCE), Fremantle, WA, Australia, 2–5 December 2019. [Google Scholar]
  18. Schodde, T.; Bergmann, K.; Kopp, S. Adaptive Robot Language Tutoring Based on Bayesian Knowledge Tracing and Predictive Decision-Making. In Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017. [Google Scholar]
  19. Youssef, K.; Said, S.; Al Kork, S.; Beyrouthy, T. Social Robotics in Education: A Survey on Recent Studies and Applications. Int. J. Emerg. Technol. Learn. (iJET) 2023, 18, 67. [Google Scholar] [CrossRef]
  20. Frennert, S.; Aminoff, H.; Ostlund, B. Technological Frames and Care Robots in Eldercare. Int. J. Soc. Robot. 2020, 13, 311–325. [Google Scholar] [CrossRef]
  21. Obayashi, K.; Kodate, N.; Masuyama, S. Assessing the Impact of an Original Soft Communicative Robot in a Nursing Home in Japan: Will Softness or Conversations Bring more Smiles to Older People? Int. J. Soc. Robot. 2021, 14, 645–665. [Google Scholar] [CrossRef]
  22. Youssef, K.; Said, S.; Beyrouthy, T.; Alkork, S. A Social Robot with Conversational Capabilities for Visitor Reception: Design and Framework. In Proceedings of the 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France, 24–26 April 2019; pp. 1–4. [Google Scholar] [CrossRef]
  23. Castellano, G.; De Carolis, B.; D’Errico, F.; Macchiarulo, N.; Rossano, V. PeppeRecycle: Improving Children’s Attitude Toward Recycling by Playing with a Social Robot. Int. J. Soc. Robot. 2021, 13, 97–111. [Google Scholar] [CrossRef]
  24. Filippini, C.; Spadolini, E.; Cardone, D.; Bianchi, D.; Preziuso, M.; Sciarretta, C.; del Cimmuto, V.; Lisciani, D.; Merla, A. Facilitating the Child-Robot Interaction by Endowing the Robot with the Capability of Understanding the Child Engagement: The Case of Mio Amico Robot. Int. J. Soc. Robot. 2020, 13, 677–689. [Google Scholar] [CrossRef]
  25. Garcia-Salguero, M.; Gonzalez-Jimenez, J.; Moreno, F.A. Human 3D Pose Estimation with a Tilting Camera for Social Mobile Robot Interaction. Sensors 2019, 19, 4943. [Google Scholar] [CrossRef] [PubMed]
  26. Pathi, S.K.; Kiselev, A.; Kristoffersson, A.; Repsilber, D.; Loutfi, A. A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera. Sensors 2019, 19, 3142. [Google Scholar] [CrossRef] [PubMed]
  27. Ismail, L.I.; Hanapiah, F.A.; Belpaeme, T.; Dambre, J.; Wyffels, F. Analysis of Attention in Child-Robot Interaction Among Children Diagnosed with Cognitive Impairment. Int. J. Soc. Robot. 2020, 13, 141–152. [Google Scholar] [CrossRef]
  28. Striepe, H.; Donnermann, M.; Lein, M.; Lugrin, B. Modeling and Evaluating Emotion, Contextual Head Movement and Voices for a Social Robot Storyteller. Int. J. Soc. Robot. 2019, 13, 441–457. [Google Scholar] [CrossRef]
  29. Faraj, Z.; Selamet, M.; Morales, C.; Torres, P.; Hossain, M.; Chen, B.; Lipson, H. Facially expressive humanoid robotic face. HardwareX 2021, 9, e00117. [Google Scholar] [CrossRef] [PubMed]
  30. Romeo, J. Why Additive Manufacturing and 3D Printing Benefits Robot Creators. Robotics Business Review. 2019. Available online: https://www.roboticsbusinessreview.com/wp-content/uploads/2019/04/RBR-AdditiveManufacturing-RobotCreators-Final.pdf (accessed on 10 March 2024).
  31. Zhang, J.; Sheng, J.; O’Neill, C.T.; Walsh, C.J.; Wood, R.J.; Ryu, J.H.; Desai, J.P.; Yip, M.C. Robotic Artificial Muscles: Current Progress and Future Perspectives. IEEE Trans. Robot. 2019, 35, 761–781. [Google Scholar] [CrossRef]
  32. Craddock, M.; Augustine, E.; Konerman, S.; Shin, M. Biorobotics: An Overview of Recent Innovations in Artificial Muscles. Actuators 2022, 11, 168. [Google Scholar] [CrossRef]
  33. InMoov—Open-Source 3D Printed Life-Size Robot. Available online: https://inmoov.fr/ (accessed on 15 November 2022).
  34. Bazzano, F.; Lamberti, F. Human-Robot Interfaces for Interactive Receptionist Systems and Wayfinding Applications. Robotics 2018, 7, 56. [Google Scholar] [CrossRef]
  35. Eye Mechanism—InMoov. Available online: https://inmoov.fr/eye-mechanism/ (accessed on 22 December 2022).
  36. Shidujaman, M.; Zhang, S.; Elder, R.; Mi, H. “Roboquin”: A Mannequin Robot with Natural Humanoid Movements. In Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018. [Google Scholar]
  37. Berra, R.; Setti, F.; Cristani, M. Berrick: A low-cost robotic head platform for human-robot interaction. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 559–566. [Google Scholar] [CrossRef]
  38. Alan|Get Ahead in Robotics. Available online: http://getaheadinrobotics.com/ (accessed on 21 November 2022).
  39. Felipe, C.Z.J.; Alejandra, G.T.Y.; Vergara Ramirez, C.F. Convolutional Neural Networks for Spatial Perception of InMoov Robot Through Stereoscopic Vision and an Assistive Technology. Enfoque UTE 2021, 12, 88–104. [Google Scholar]
  40. Gong, L.; Chen, B.; Xu, W.; Liu, C.; Li, X.; Zhao, Z.; Zhao, L. Motion Similarity Evaluation between Human and a Tri-Co Robot during Real-Time Imitation with a Trajectory Dynamic Time Warping Model. Sensors 2022, 22, 1968. [Google Scholar] [CrossRef] [PubMed]
  41. Cheng, H.; Ji, G. Design and implementation of a low cost 3D printed humanoid robotic platform. In Proceedings of the 2016 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), Chengdu, China, 19–22 June 2016; pp. 86–91. [Google Scholar] [CrossRef]
  42. The Furhat Robot|Furhat Robotics. Available online: https://furhatrobotics.com/furhat-robot/ (accessed on 25 January 2022).
  43. Al Moubayed, S.; Beskow, J.; Skantze, G.; Granstrom, B. Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction; Cognitive Behavioural Systems. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  44. Kismet—ROBOTS: Your Guide to the World of Robotics. Available online: https://robots.ieee.org/robots/kismet/ (accessed on 5 December 2022).
  45. Bennewitz, M.; Faber, F.; Joho, D.; Behnke, S. Fritz—A Humanoid Communication Robot. In Proceedings of the 16th IEEE International Conference on Robot & Human Interactive Communication, Jeju, Republic of Korea, 26–29 August 2007. [Google Scholar]
  46. Han is a Spookily Realistic Humanoid Robot|WIRED UK. Available online: https://www.wired.co.uk/article/han-realistic-humanoid-robot-video (accessed on 10 October 2022).
  47. Humanoid Robot can Recognize and Interact with People|Reuters. Available online: https://www.reuters.com/article/us-china-humanoid-robot-idUKKBN0NB21V20150420 (accessed on 10 October 2022).
  48. Han—Hanson Robotics. Available online: https://www.hansonrobotics.com/han/ (accessed on 10 October 2022).
  49. Said, S.; AlAsfour, G.; Alghannam, F.; Khalaf, S.; Susilo, T.; Prasad, B.; Youssef, K.; Alkork, S.; Beyrouthy, T. Experimental Investigation of an Interactive Animatronic Robotic Head Connected to ChatGPT. In Proceedings of the 2023 5th International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France, 7–9 June 2023; pp. 1–4. [Google Scholar] [CrossRef]
  50. ChatGPT Voice: How to Access the Free AI Voice Chat Feature—Cloudbooklet. Available online: https://www.cloudbooklet.com/ai-audio/how-to-use-chatgpt-voice-chat (accessed on 8 May 2024).
  51. Voice Chatbot—IBM Watsonx Assistant. Available online: https://www.ibm.com/products/watsonx-assistant/voice (accessed on 8 May 2024).
Figure 1. The humanoid robotic head Adam’s structural design.
Figure 2. Dimensions of the Adam robotic head. All measures are in millimeters.
Figure 3. Eye movement mechanism; 3D design.
Figure 4. Jaw movement mechanisms; 3D design.
Figure 5. Neck movement mechanisms; 3D design.
Figure 6. (i,ii) Pitch and (iii,iv) yaw movements.
Figure 7. (a) Side view of the neck pitching mechanism with the kinematic chain highlighted in orange; (b) kinematic diagram of the pitching mechanism showing the transmission angle γ.
Figure 8. Transmission angle.
Figure 9. Motor torque for pitching.
Figure 10. FEA von Mises stresses in the important neck components.
Figure 11. Detection of the face of a person interacting with Adam. Note that the face is blurred in the current illustration. Demonstration video of Adam’s operation: https://youtu.be/6w9tZgyRsAs?si=9OjM9k1w_Xy-wXd_ (accessed on 10 March 2024).
Figure 12. Consecutive steps in the conversational block of Adam.
Figure 13. Number of accurate answers provided by Adam to the five questions asked by each of the ten interlocutors.
Table 1. Some characteristics of humanoid robots surveyed in Section 2.

| Robot | Head DoF | Vision | Sound Acquisition | 3D-Printed | Open-Source Design |
|---|---|---|---|---|---|
| InMoov [33] | 5 [34] | Cameras in the eye locations [35] | Customizable, example: external microphone [34] | Yes | Yes |
| Roboquin [36] | 3 | Unspecified | Through a PC computer | Unspecified | Unspecified |
| Berrick [37] | 5 | Monocular camera in the left eye | N/A | Yes | Yes |
| Alan and Alena [38] | 6 | 1 camera in the forehead | Yes, unspecified number of microphones | Unspecified | No [37] |
Table 2. Finite element analysis results for the critical components of Adam.

| Part | Name | Material | Yield Strength (MPa) | Maximum Stress (MPa) |
|---|---|---|---|---|
| a. | Neck column | Aluminum 6061 | 275 | 46.84 |
| b. | Servo base | PLA | 30 | 21.76 |
| c. | Neck pivot | PLA | 30 | 5.11 |
| d. | Servo arm | Aluminum 7075 | 505 | 82.05 |
| e. | Head base | PLA | 30 | 11.79 |
| f. | Internal frame (left) | PLA | 30 | 1.87 |
