Technologies: Latest open access articles published in Technologies at https://www.mdpi.com/journal/technologies | Publisher: MDPI | Language: en | License: Creative Commons Attribution (CC-BY) | Contact: support@mdpi.com
  • Technologies, Vol. 12, Pages 97: Tongue Disease Prediction Based on Machine Learning Algorithms (Article, 2024-06-28) https://www.mdpi.com/2227-7080/12/7/97

    Technologies doi: 10.3390/technologies12070097

    Authors: Hassoon, Al-Naji, Khalid, Chahl

    The diagnosis of tongue disease is based on the observation of various tongue characteristics, including color, shape, texture, and moisture, which indicate the patient’s health status. Tongue color is one such characteristic, playing a vital role in identifying diseases and gauging how far an ailment has progressed. With the development of computer vision systems, especially in the field of artificial intelligence, there has been significant progress in acquiring, processing, and classifying tongue images. This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YCbCr, HSV, LAB, and YIQ). The proposed imaging system was trained on 5260 images labeled with seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and extreme gradient boosting (XGBoost), to predict tongue color under any lighting conditions. XGBoost achieved the highest accuracy at 98.71%, while NB had the lowest at 91.43%. Based on these results, XGBoost was chosen as the classifier of the proposed imaging system and linked with a graphical user interface to predict tongue color and its related diseases in real time. This proposed imaging system thus opens the door for expanded tongue diagnosis within future point-of-care health systems.
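    As a rough illustration of the classification stage described in this abstract, the sketch below trains an XGBoost classifier on per-image mean channel values drawn from several color spaces. This is not the authors' code: the placeholder dataset, the feature choice, and the hyperparameters are all assumptions, and YIQ is omitted because OpenCV has no built-in conversion for it.

```python
# Sketch of the classification stage: mean channel values from several color
# spaces feed an XGBoost classifier. Placeholder data below; feature choice
# and hyperparameters are assumptions, and YIQ is omitted.
import cv2
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def color_features(img_rgb):
    """Per-channel means in RGB, HSV, YCrCb, and LAB."""
    spaces = [img_rgb,
              cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV),
              cv2.cvtColor(img_rgb, cv2.COLOR_RGB2YCrCb),
              cv2.cvtColor(img_rgb, cv2.COLOR_RGB2LAB)]
    return np.concatenate([s.reshape(-1, 3).mean(axis=0) for s in spaces])

# Placeholder dataset: 70 random images over 7 balanced color classes (0..6);
# replace with the real tongue-image dataset.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, (70, 64, 64, 3), dtype=np.uint8)
labels = np.repeat(np.arange(7), 10)

X = np.stack([color_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          stratify=labels, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```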

    Technologies, Vol. 12, Pages 96: Deep Learning for Skeleton-Based Human Activity Segmentation: An Autoencoder Approach (Article, 2024-06-27) https://www.mdpi.com/2227-7080/12/7/96

    Technologies doi: 10.3390/technologies12070096

    Authors: Md Amran Hossen, Abdul Ghani Naim, Pg Emeroylariffion Abas

    Automatic segmentation is essential for enhancing human activity recognition, especially given the limitations of publicly available datasets, which often lack diversity in daily activities. This study introduces a novel segmentation method that utilizes skeleton data for a more accurate and efficient analysis of human actions. An autoencoder extracts representative features and reconstructs the dataset, and the discrepancies between the original and reconstructed data establish a segmentation threshold. This approach allows activity datasets to be segmented automatically into distinct segments. Rigorous evaluations against ground truth across three publicly available datasets demonstrate the method’s effectiveness, achieving average annotation error, precision, recall, and F1-score values of 3.6, 90%, 87%, and 88%, respectively. This illustrates the robustness of the proposed method in accurately identifying change points and segmenting continuous skeleton-based activities, as compared with two other state-of-the-art techniques: one based on deep learning and another using a classical time-series segmentation algorithm. Additionally, the dynamic thresholding mechanism adapts the segmentation process to different activity dynamics, improving overall segmentation accuracy. This performance highlights the potential of the proposed method to significantly advance the field of human activity recognition by improving the accuracy and efficiency of identifying and categorizing human movements.
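    To make the reconstruction-error idea concrete, here is a minimal sketch. It is not the paper's architecture: the placeholder data, layer sizes, training settings, and the mean-plus-two-sigma threshold rule are all illustrative assumptions.

```python
# Sketch: autoencoder reconstruction error as a segmentation signal.
# X is placeholder data standing in for an (n_frames, n_features) array of
# skeleton joint coordinates; layer sizes and threshold rule are illustrative.
import numpy as np
import tensorflow as tf

X = np.random.rand(500, 75).astype("float32")    # placeholder frames

n_features = X.shape[1]
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),   # compressed representation
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_features),              # reconstruction
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=64, verbose=0)

err = np.mean((X - autoencoder.predict(X, verbose=0)) ** 2, axis=1)
threshold = err.mean() + 2 * err.std()        # assumed thresholding rule
change_points = np.where(err > threshold)[0]  # candidate segment boundaries
print(change_points)
```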

    Technologies, Vol. 12, Pages 95: Integrating Artificial Intelligence to Biomedical Science: New Applications for Innovative Stem Cell Research and Drug Development (Review, 2024-06-26) https://www.mdpi.com/2227-7080/12/7/95

    Technologies doi: 10.3390/technologies12070095

    Authors: Minjae Kim, Sunghoi Hong

    Artificial intelligence (AI) is rapidly advancing, aiming to mimic human cognitive abilities, and is addressing complex medical challenges in the field of biological science. Over the past decade, AI has experienced exponential growth and proven its effectiveness in processing massive datasets and optimizing decision-making. This review emphasizes the active utilization of AI in the field of stem cells. Stem cell therapies use diverse stem cells for drug development, disease modeling, and medical treatment research. However, cultivating and differentiating stem cells, along with demonstrating cell efficacy, require significant time and labor. As discussed in this review, convolutional neural networks (CNNs) are widely used to overcome these limitations by analyzing stem cell images, predicting cell types and differentiation efficiency, and enhancing therapeutic outcomes. In the biomedical sciences, AI algorithms are used to automatically screen large compound databases, identify potential molecular structures and characteristics, and evaluate the efficacy and safety of candidate drugs for specific diseases. AI also aids in predicting disease occurrence by analyzing patients’ genetic data, medical images, and physiological signals, facilitating early diagnosis. Artificial intelligence has the potential to make significant advances in disease risk prediction, diagnosis, prognosis, and treatment and to reshape the future of healthcare. This review summarizes the applications and advancements of AI technology in fields such as drug development, regenerative medicine, and stem cell research.

    Technologies, Vol. 12, Pages 94: Transformer-Based Water Stress Estimation Using Leaf Wilting Computed from Leaf Images and Unsupervised Domain Adaptation for Tomato Crops (Article, 2024-06-25) https://www.mdpi.com/2227-7080/12/7/94

    Technologies doi: 10.3390/technologies12070094

    Authors: Makoto Koike, Riku Onuma, Ryo Adachi, Hiroshi Mineno

    Modern agriculture faces the dual challenge of ensuring sustainability while meeting the growing global demand for food. Smart agriculture, which uses data from the environment and plants to deliver water exactly when and how it is needed, has attracted significant attention. This approach requires precise water management and highly accurate real-time monitoring of crop water stress. Existing monitoring methods pose challenges such as the risk of plant damage, costly sensors, and the need for expert adjustments. Therefore, a low-cost, highly accurate water stress estimation model was developed that uses deep learning and commercially available sensors. The model uses the relative stem diameter as a water stress index and incorporates data from environmental sensors and an RGB camera, which are processed by the proposed daily normalization. In addition, domain adaptation was implemented in the Transformer model to enable robust learning across different cultivation areas. The accuracy of the model was evaluated using real cultivation data from tomato crops, achieving a coefficient of determination (R²) of 0.79 in water stress estimation. Furthermore, the model maintained a high level of accuracy when applied to different areas, with an R² of 0.76, demonstrating its adaptability under varying conditions.
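    The abstract does not spell out the "daily normalization", so the sketch below substitutes a common per-day min-max rescaling as a stand-in, followed by the R² score used for evaluation; all data below are placeholders, and the paper's exact transform may differ.

```python
# Sketch: per-day normalization of a sensor series plus R^2 evaluation.
# Assumption: "daily normalization" is approximated here by per-day min-max
# scaling; the paper's exact transform may differ. Data are placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import r2_score

idx = pd.date_range("2024-06-01", periods=96, freq="h")  # placeholder timestamps
df = pd.DataFrame({"timestamp": idx, "stem_diameter": np.random.rand(96)})
df["day"] = df["timestamp"].dt.date

df["relative_stem_diameter"] = df.groupby("day")["stem_diameter"].transform(
    lambda s: (s - s.min()) / (s.max() - s.min()))

# Once a model produces predictions, accuracy is scored as R^2:
y_true, y_pred = np.random.rand(50), np.random.rand(50)  # placeholders
print("R2:", r2_score(y_true, y_pred))                   # paper reports 0.79 / 0.76
```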

    Technologies, Vol. 12, Pages 93: Multi-Objective Optimisation of the Battery Box in a Racing Car (Article, 2024-06-25) https://www.mdpi.com/2227-7080/12/7/93

    Technologies doi: 10.3390/technologies12070093

    Authors: Chao Ma, Caiqi Xu, Mohammad Souri, Elham Hosseinzadeh, Masoud Jabbari

    The optimisation of electric vehicle battery boxes while preserving their structural performance presents a formidable challenge. Existing studies typically involve fewer than 10 design variables in their optimisation processes, far fewer than realistic battery box design scenarios demand. The present study, for the first time, uses sensitivity analysis to screen the design variables and achieve an efficient optimisation design starting from a large number of original design variables. Specifically, a sensitivity analysis method was proposed to screen the optimisation variables, reducing computational complexity while preserving the efficiency of the optimisation process. A combination of the Generalised Regression Neural Network (GRNN) and the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) was employed to construct surrogate models and solve the optimisation problem. The optimisation model integrates these techniques to balance structural performance and weight reduction. The results demonstrate a significant reduction in battery box weight while maintaining structural integrity. The proposed approach therefore provides important insights for high-efficiency multi-objective optimisation of battery box structures.
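    For readers unfamiliar with GRNNs, the surrogate at the heart of this pipeline reduces to Nadaraya-Watson kernel regression; a minimal sketch follows, with the smoothing factor sigma as an assumed hyperparameter and placeholder data (the paper's settings are not reproduced). NSGA-II would then search over this surrogate instead of the expensive structural model.

```python
# Sketch: a Generalised Regression Neural Network (GRNN) used as a surrogate.
# A GRNN is Nadaraya-Watson kernel regression: each prediction is a Gaussian-
# weighted average of training targets. sigma is an assumed smoothing factor.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # Pairwise squared distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))      # Gaussian kernel weights, shape (q, n)
    return (w @ y_train) / w.sum(axis=1)    # weighted average of 1-D targets

# Placeholder samples standing in for (design variables -> structural response)
rng = np.random.default_rng(0)
X_tr, y_tr = rng.random((50, 4)), rng.random(50)
print(grnn_predict(X_tr, y_tr, rng.random((3, 4))))
```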

    Technologies, Vol. 12, Pages 92: A Review of Automatic Pain Assessment from Facial Information Using Machine Learning (Review, 2024-06-20) https://www.mdpi.com/2227-7080/12/6/92

    Technologies doi: 10.3390/technologies12060092

    Authors: Najib Ben Aoun

    Pain assessment has become an important component of modern healthcare systems, aiding medical professionals in patient diagnosis and in providing appropriate care and therapy. Conventionally, patients are asked to report their pain level verbally. However, this subjective method is generally inaccurate, impossible for non-communicative people, susceptible to physiological and environmental factors, and time-consuming, which renders it inefficient in healthcare settings. There has therefore been a growing need for objective, reliable, and automatic pain assessment alternatives. Because facial expressions are efficient pain biomarkers that accurately reflect pain intensity, and because machine learning methods can effectively learn the subtle nuances of pain expressions and accurately predict pain intensity, automatic pain assessment methods have evolved rapidly. This paper reviews recent pain assessment methods based on spatial facial expressions and machine learning. We also highlight the pain intensity scales, datasets, and performance evaluation criteria used by these methods, and we report and discuss their contributions, strengths, and limitations. The review thereby lays the groundwork for further study toward more accurate automatic pain assessment.

    Technologies, Vol. 12, Pages 91: A Computational Framework for Enhancing Industrial Operations and Electric Network Management: A Case Study (Article, 2024-06-19) https://www.mdpi.com/2227-7080/12/6/91

    Technologies doi: 10.3390/technologies12060091

    Authors: André F. V. Pedroso, Francisco J. G. Silva, Raul D. S. G. Campilho, Rita C. M. Sales-Contini, Arnaldo G. Pinto, Renato R. Moreira

    Automotive industries require constant technological development and the capacity to adapt to market needs. Component suppliers must therefore be able to adapt to persistent trend changes and technical improvements, responding to customers’ expectations and making their manufacturing methods as flexible as possible. Concepts such as layout flexibility, management of industrial facilities, and building information modeling (BIM) are increasingly being addressed within the automotive industry to envision and select the necessary information exchanges. To address this question, and the corresponding gap in the literature, this work develops a novel tool that allows new or relocated equipment to be monitored and assigned to the switchboards within a given industrial plant. The solution aims to increase the flexibility of production lines through the assessment, analysis, improvement, and reorganization of the electrical load distribution, so that projects involving layout changes can be developed accurately. The tool is validated with an automotive manufacturer. With the implementation of this open-source tool, a detailed electrical flow management system is accomplished, and it has proven successful and essential in raising levels of organizational flexibility. This has safeguarded the company’s competitiveness through effective integrated administration methods and tools, for instance by making it much easier to study the insertion of new or relocated equipment without production-line stoppages.

    Technologies, Vol. 12, Pages 90: A Modified Criss-Cross-Based T-Type MLI with Reduced Power Components (Article, 2024-06-18) https://www.mdpi.com/2227-7080/12/6/90

    Technologies doi: 10.3390/technologies12060090

    Authors: Kailash Kumar Mahto, Bidyut Mahato, Bikramaditya Chandan, Durbanjali Das, Priyanath Das, Swati Kumari, Vasiliki Vita, Christos Pavlatos, Georgios Fotis

    Significant advancements in the field of power electronics have created an ideal opportunity to introduce various topologies of multilevel inverters (MLIs). These topologies offer notable characteristics such as a high-quality staircase sinusoidal output voltage, a reduced number of power switches, and no filter requirement. In this work, a new asymmetrical MLI topology is proposed that reduces the component count of the inverter while preserving good voltage-step creation. The proposed topology provides a 17-level, staircase-type, nearly sinusoidal output voltage waveform and requires fewer switches than existing topologies of the same level count. A carrier-based sinusoidal pulse-width modulation technique is used at a switching frequency of 3 kHz, and the functioning of the proposed inverter topology is thoroughly examined. A 17-level asymmetrical inverter is implemented both in MATLAB/Simulink and experimentally using a dSPACE-1103 controller, and the simulation results are verified against the experimental results for modulation indices of 1 and 0.6.
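    For context, carrier-based sinusoidal PWM for a multilevel waveform can be sketched with level-shifted triangular carriers, as below. This illustrates the general technique only: the 16-carrier arrangement and per-unit staircase output are assumptions, not the paper's gating logic for the proposed topology.

```python
# Sketch: carrier-based sinusoidal PWM with level-shifted triangular carriers
# for a 17-level staircase (16 carrier bands). General technique only; the
# paper's exact gating scheme is not reproduced.
import numpy as np

f_ref, f_carrier, m = 50.0, 3000.0, 1.0        # 50 Hz reference, 3 kHz carriers
t = np.linspace(0, 1 / f_ref, 10000)           # one fundamental period
ref = m * np.sin(2 * np.pi * f_ref * t)        # sinusoidal reference in [-1, 1]

n_carriers = 16
edges = np.linspace(-1, 1, n_carriers + 1)     # band edges for 17 levels
tri = 2 * np.abs((t * f_carrier) % 1 - 0.5)    # unit triangle wave in [0, 1]
carriers = edges[:-1, None] + (edges[1] - edges[0]) * tri[None, :]

level = (ref[None, :] > carriers).sum(axis=0)  # how many carriers ref exceeds
v_out = edges[level]                           # per-unit staircase output
```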

    Technologies, Vol. 12, Pages 89: A New LCL Filter Design Method for Single-Phase Photovoltaic Systems Connected to the Grid via Micro-Inverters (Article, 2024-06-12) https://www.mdpi.com/2227-7080/12/6/89

    Technologies doi: 10.3390/technologies12060089

    Authors: Heriberto Adamas-Pérez, Mario Ponce-Silva, Jesús Darío Mina-Antonio, Abraham Claudio-Sánchez, Omar Rodríguez-Benítez, Oscar Miguel Rodríguez-Benítez

    This paper proposes a new sizing approach to reduce the footprint and optimize the performance of an LCL filter implemented in photovoltaic systems using grid-connected single-phase microinverters. The analysis is carried out on a single-phase full-bridge inverter under two assumptions: (1) a unity power factor at the connection point between the AC grid and the LCL filter; and (2) a control circuit based on unipolar sinusoidal pulse-width modulation (SPWM). In particular, the ripple and harmonics of the LCL filter input current and of the current injected into the grid are analyzed. Simulink simulations and experimental tests confirm that the filter volume can be reduced considerably by optimizing each passive component, compared with what is already available in the literature, while guaranteeing excellent filtering performance. Specifically, the inductance values were reduced by almost 40% and the capacitor value by almost 100%. The main applications of this design methodology are grid-connected single-phase microinverters and research in power electronics and optimization.
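    Any LCL sizing procedure is constrained by the filter's resonance frequency; the standard textbook relation is shown below with placeholder component values, not the optimized design from the paper.

```python
# Sketch: the standard LCL resonance-frequency relation consulted when sizing
# grid filters. Component values are placeholders, not the paper's design.
import math

L1, L2, Cf = 1.2e-3, 0.8e-3, 2.2e-6   # inverter-side L, grid-side L, capacitor
f_res = math.sqrt((L1 + L2) / (L1 * L2 * Cf)) / (2 * math.pi)
print(f"resonance: {f_res:.0f} Hz")   # usually kept between 10*f_grid and f_sw/2
```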

    Technologies, Vol. 12, Pages 88: Accurate Surge Arrester Modeling for Optimal Risk-Aware Lightning Protection Utilizing a Hybrid Monte Carlo–Particle Swarm Optimization Algorithm (Article, 2024-06-08) https://www.mdpi.com/2227-7080/12/6/88

    Technologies doi: 10.3390/technologies12060088

    Authors: Amir Hossein Kimiai Asadi, Mohsen Eskandari, Hadi Delavari

    The application of arresters is critical for the safe operation of electric grids against lightning, since arresters limit the consequences of lightning-induced over-voltages. However, surge arrester protection in electric grids is challenging due to the intrinsic complexities of distribution grids, including overhead lines and power components such as transformers. In this paper, an optimal arrester placement technique is developed by proposing a multi-objective function that includes technical, safety and risk, and economic indices. An effective placement model demands comprehensive and accurate modeling of a grid’s components, so appropriate high-frequency transient mathematical models are developed for the arrester, the earth, an oil-immersed transformer, overhead lines, and the lightning-induced voltage. Notably, because the arrester model critically impacts the performance of the placement technique, a new arrester model is developed and evaluated based on real technical data from manufacturers such as Pars, Tridelta, and Siemens, and compared with the IEEE, Fernandez, and Pinceti models. The arrester model is incorporated in an optimization problem considering over-voltage protection performance and the risk, technical, and economic indices, and it is solved using the particle swarm optimization (PSO) and Monte Carlo (MC) techniques. To validate the proposed arrester model and placement technique, real data from the Chopoghloo feeder in Bahar, Hamedan, Iran, are simulated. The feeder spans three different geographical areas: rural, agricultural plain, and mountainous.
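    To show how the two techniques can be combined in principle, the sketch below wraps a Monte Carlo risk estimate inside a minimal PSO loop. The `risk_given_strike` objective is a dummy stand-in, and the bounds, swarm size, and coefficients are illustrative rather than the paper's formulation.

```python
# Sketch: a Monte Carlo risk estimate wrapped inside a minimal PSO loop.
# risk_given_strike is a dummy placeholder objective; bounds, swarm size, and
# PSO coefficients are illustrative, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)

def risk_given_strike(x, strike):          # dummy stand-in objective
    return float(np.sum((x - strike) ** 2))

def mc_objective(x, n_samples=100):        # expected risk over random strikes
    strikes = rng.uniform(0, 1, size=(n_samples, x.size))
    return np.mean([risk_given_strike(x, s) for s in strikes])

dim, n_particles, iters = 5, 20, 40
pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mc_objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([mc_objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()]

print("best expected risk:", pbest_val.min())
```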

    Technologies, Vol. 12, Pages 87: Electron Energy-Loss Spectroscopy Method for Thin-Film Thickness Calculations with a Low Incident Energy Electron Beam (Article, 2024-06-07) https://www.mdpi.com/2227-7080/12/6/87

    Technologies doi: 10.3390/technologies12060087

    Authors: Ahmad M. D. (Assa’d) Jaber, Ammar Alsoud, Saleh R. Al-Bashaish, Hmoud Al Dmour, Marwan S. Mousa, Tomáš Trčka, Vladimír Holcman, Dinara Sobola

    In this study, the thickness of a thin film (tc) was calculated using electron energy-loss spectroscopy at low primary electron energies (≤10 keV). The method uses the ratio of the intensity of the transmitted background spectrum to the intensity of the transmitted electrons with zero energy loss (elastic), given an accurate inelastic mean free path (λ). A Monte Carlo model was used to simulate the interaction between the electron beam and the tested thin films. The total background of the transmitted electrons is taken as the electrons transmitted through the film with an energy loss above 50 eV, to eliminate the effect of the secondary electrons. The method was used at low primary electron energy to measure the thickness (t) of C, Si, Cr, Cu, Ag, and Au films below 12 nm. For the C and Si films, the accuracy of the thickness calculation increased as the primary electron energy and the film thickness increased. For heavy elements, however, the accuracy increased as the primary electron energy increased and the film thickness decreased. High accuracy (2% uncertainty) for the C and Si films was observed at large thicknesses and 10 keV, where t/λ ≈ 1, whereas for heavy-element films the highest accuracy (uncertainty below 8%) was found at small thicknesses and 10 keV, where t/λ ≤ 0.29. The present results show that an accurate film thickness measurement can be obtained at primary electron energies of 10 keV or less and a ratio of t/λ ≤ 2. This demonstrates the potential of low-loss electron energy-loss spectroscopy in transmission electron microscopy as a fast and straightforward method for determining thin-film thickness at low primary electron energies.
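    The measurement principle described here is a form of the classic log-ratio method; a one-line worked example follows, with placeholder intensities and an assumed mean free path.

```python
# Sketch: log-ratio thickness estimate, t = lambda * ln(I_total / I_zero_loss).
# Intensities and the mean free path below are placeholder values.
import math

I_total, I_zero_loss = 1.00e6, 3.68e5  # integrated spectrum counts (placeholders)
lam = 10.0                             # inelastic mean free path, nm (assumed)
t = lam * math.log(I_total / I_zero_loss)
print(f"t/lambda = {t / lam:.2f}  ->  thickness ~ {t:.1f} nm")
```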

    Technologies, Vol. 12, Pages 86: Advancements in 3D Printing: Directed Energy Deposition Techniques, Defect Analysis, and Quality Monitoring (Review, 2024-06-07) https://www.mdpi.com/2227-7080/12/6/86

    Technologies doi: 10.3390/technologies12060086

    Authors: Muhammad Mu’az Imran, Azam Che Idris, Liyanage Chandratilak De Silva, Yun-Bae Kim, Pg Emeroylariffion Abas

    This paper provides a comprehensive analysis of recent advancements in additive manufacturing, a transformative approach to industrial production that allows for the layer-by-layer construction of complex parts directly from digital models. Focusing specifically on Directed Energy Deposition, it begins by clarifying the fundamental principles of metal additive manufacturing as defined by International Organization for Standardization and American Society for Testing and Materials standards, with an emphasis on laser- and powder-based methods that are pivotal to Directed Energy Deposition. It explores the critical process mechanisms that can lead to defect formation in the manufactured parts, offering in-depth insights into the factors that influence these outcomes. Additionally, the unique mechanisms of defect formation inherent to Directed Energy Deposition are examined in detail. The review also covers the current landscape of process evaluation and non-destructive testing methods essential for quality assurance, including both traditional and contemporary in situ monitoring techniques, with a particular focus given to advanced machine-vision-based methods for geometric analysis. Furthermore, the integration of process monitoring, multiphysics simulation models, and data analytics is discussed, charting a forward-looking roadmap for the development of Digital Twins in Laser–Powder-based Directed Energy Deposition. Finally, this review highlights critical research gaps and proposes directions for future research to enhance the accuracy and efficiency of Directed Energy Deposition systems.

    Technologies, Vol. 12, Pages 85: Behind the Door: Practical Parameterization of Propagation Parameters for IEEE 802.11ad Use Cases (Article, 2024-06-07) https://www.mdpi.com/2227-7080/12/6/85

    Technologies doi: 10.3390/technologies12060085

    Authors: Luciano Ahumada, Erick Carreño, Albert Anglès, Diego Dujovne, Pablo Palacios Játiva

    The integration of the 60 GHz band into the IEEE 802.11 standard has revolutionized indoor wireless services. However, this band presents unique challenges to indoor wireless communication infrastructure, originally designed to handle data traffic in residential and office environments. Estimating 60 GHz signal propagation in indoor settings is particularly complicated due to dynamic contextual factors, making it essential to ensure adequate coverage for all connected devices. Consequently, empirical channel modeling plays a pivotal role in understanding real-world behavior, which is characterized by a complex interplay of stationary and mobile elements. Given the highly directional nature of 60 GHz propagation, this study addresses a seemingly simple but important question: what is the impact of employing highly directive antennas when deviating from the line of sight? To address this question, we conducted an empirical measurement campaign of wireless channels within an office environment. Our assessment focused on power losses and distribution within an angular range while an indoor base station served indoor users, simulating the operation of an IEEE 802.11ad high-speed WLAN at 60 GHz. Additionally, we explored scenarios with and without pedestrian movement in the vicinity of wireless terminals. Our observations reveal the presence of significant antenna lobes even in obstructed links, indicating potential opportunities to use angular combiners or beamformers to enhance link availability and the data rate. This empirical study provides valuable information and channel parameters to simulate 60 GHz millimeter wave (mm-wave) links in indoor environments, paving the way for more efficient and robust wireless communication systems.
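    Campaigns like this one typically distill their measurements into parameters for an empirical model such as log-distance path loss with log-normal shadowing; the sketch below shows that model with placeholder exponent and shadowing values, not the parameters reported in the paper.

```python
# Sketch: log-distance path loss with log-normal shadowing, the usual form for
# reporting empirical channel parameters. PL(d0), n, and sigma are placeholders.
import numpy as np

def path_loss_db(d, pl_d0=68.0, d0=1.0, n=2.2, sigma=4.0,
                 rng=np.random.default_rng(0)):
    """PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma, all in dB."""
    return pl_d0 + 10 * n * np.log10(d / d0) + rng.normal(0, sigma, np.shape(d))

print(path_loss_db(np.array([1.0, 5.0, 10.0])))
```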

    Technologies, Vol. 12, Pages 84: Dual-Band Antenna at 28 and 38 GHz Using Internal Stubs and Slot Perturbations (Article, 2024-06-06) https://www.mdpi.com/2227-7080/12/6/84

    Technologies doi: 10.3390/technologies12060084

    Authors: Parveez Shariff Bhadravathi Ghouse, Pradeep Kumar, Pallavi R. Mane, Sameena Pathan, Tanweer Ali, Alexandros-Apostolos A. Boulogeorgos, Jaume Anguera

    A double-stub matching technique is used to design a dual-band monopole antenna at 28 and 38 GHz, with transmission-line stubs serving as the matching elements. The first matching network comprises series capacitive and inductive stubs, producing impedance matching at the 28 GHz band with a wide bandwidth. The second matching network has two shunt inductive stubs, generating resonance at 38 GHz. A Smith chart is utilized to predict the stub lengths, and some lengths are fine-tuned when the stubs’ dimensions are realized physically. The proposed antenna is compact, with a profile of 0.75λ1 × 0.66λ1 (where λ1 is the free-space wavelength at 28 GHz). The measured bandwidths are 27–28.75 GHz and 36.20–42.43 GHz. Although the physical series capacitance of the first matching network is a slot in the ground plane, the antenna achieves a good gain of 7 dBi in both bands. Its compact design, good bandwidth, and gain make the proposed antenna a candidate for 5G wireless applications.
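    The stub relations behind this kind of Smith-chart matching are the standard transmission-line formulas; a small sketch follows, with a placeholder characteristic impedance and electrical lengths.

```python
# Sketch: textbook input impedances of transmission-line stubs, the building
# blocks of double-stub matching. Z0 and electrical lengths are placeholders.
import numpy as np

def short_stub(z0, beta_l):   # short-circuited stub: Z = j*Z0*tan(beta*l)
    return 1j * z0 * np.tan(beta_l)

def open_stub(z0, beta_l):    # open-circuited stub: Z = -j*Z0/tan(beta*l)
    return -1j * z0 / np.tan(beta_l)

z0 = 50.0
print(short_stub(z0, np.pi / 8))  # inductive reactance for beta*l < pi/2
print(open_stub(z0, np.pi / 8))   # capacitive reactance for beta*l < pi/2
```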

    Technologies, Vol. 12, Pages 83: Data Readout Techniques on FPGA for the ATLAS RPC-BIS78 Detectors (Article, 2024-06-04) https://www.mdpi.com/2227-7080/12/6/83

    Technologies doi: 10.3390/technologies12060083

    Authors: Andreas Vgenopoulos, Kostas Kordas, Federico Lasagni, Sabrina Perrella, Alessandro Polini, Riccardo Vari

    The firmware developed for the readout and trigger processing of the information emerging from the BIS78-RPC Muon Spectrometer chambers in the ATLAS experiment at CERN is presented here, together with data processing techniques, data acquisition software, and tests of the readout chain system, which represent efforts to make these chambers operational in the ATLAS experiment. This work is performed in the context of the BIS78-RPC project, which deals with the pilot deployment of a new generation of sMDT+RPCs in the experiment. Such chambers are planned to be fully deployed in the whole barrel inner layer of the Muon Spectrometer during the Phase II upgrade of the ATLAS experiment. On-chamber front-ends include an amplifier, a discriminator ASIC, and an LVDS transmitter. The signal is digitized by CERN HPTDC chips and then processed by an FPGA, which is the heart of the readout and trigger processing, using various techniques.

    Technologies, Vol. 12, Pages 82: Path Planning for Autonomous Mobile Robot Using Intelligent Algorithms (Article, 2024-06-03) https://www.mdpi.com/2227-7080/12/6/82

    Technologies doi: 10.3390/technologies12060082

    Authors: Jorge Galarza-Falfan Enrique Efrén García-Guerrero Oscar Adrian Aguirre-Castro Oscar Roberto López-Bonilla Ulises Jesús Tamayo-Pérez José Ricardo Cárdenas-Valdez Carlos Hernández-Mejía Susana Borrego-Dominguez Everardo Inzunza-Gonzalez

    Machine learning technologies are being integrated into robotic systems at an accelerating pace to enhance their efficacy and adaptability in dynamic environments. The primary goal of this research was to propose a method to develop an Autonomous Mobile Robot (AMR) that integrates Simultaneous Localization and Mapping (SLAM), odometry, and artificial vision based on deep learning (DL), all executed on a high-performance Jetson Nano embedded system, with particular emphasis on SLAM-based obstacle avoidance and path planning using the Adaptive Monte Carlo Localization (AMCL) algorithm. Two Convolutional Neural Networks (CNNs) were selected due to their proven effectiveness in image and pattern recognition tasks. The ResNet18 and YOLOv3 algorithms facilitate scene perception, enabling the robot to interpret its environment effectively. Both algorithms were implemented for real-time object detection, identifying and classifying objects within the robot’s environment, and were selected to evaluate their performance metrics, which are critical for real-time applications. A comparative analysis of the proposed DL models focused on enhancing vision systems for autonomous mobile robots. Several simulations and real-world trials were conducted to evaluate the performance and adaptability of these models in navigating complex environments. The proposed vision system with CNN ResNet18 achieved an average accuracy of 98.5%, a precision of 96.91%, a recall of 97%, and an F1-score of 98.5%, while the YOLOv3 model achieved an average accuracy of 96%, a precision of 96.2%, a recall of 96%, and an F1-score of 95.99%. These results underscore the effectiveness of the proposed intelligent algorithms, robust embedded hardware, and sensors in robotic applications. This study demonstrates that advanced DL algorithms perform well on robotic platforms and could be applied in many fields, such as transportation and assembly. Based on these findings, intelligent systems could be adopted more widely in the operation and development of AMRs.

    Path Planning for Autonomous Mobile Robot Using Intelligent Algorithms Jorge Galarza-Falfan Enrique Efrén García-Guerrero Oscar Adrian Aguirre-Castro Oscar Roberto López-Bonilla Ulises Jesús Tamayo-Pérez José Ricardo Cárdenas-Valdez Carlos Hernández-Mejía Susana Borrego-Dominguez Everardo Inzunza-Gonzalez doi: 10.3390/technologies12060082 Technologies 2024-06-03 Technologies 2024-06-03 12 6
    Article
    82 10.3390/technologies12060082 https://www.mdpi.com/2227-7080/12/6/82
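    Since the navigation stack above relies on Adaptive Monte Carlo Localization, a toy single-update sketch of the underlying particle filter may help: particles are re-weighted by a measurement likelihood and then resampled. The Gaussian sensor model, the poses, and the predicted ranges are all invented, and real AMCL additionally adapts the particle count, which this sketch omits.

```python
import math
import random

def measurement_likelihood(predicted: float, observed: float, sigma: float = 0.2) -> float:
    """Gaussian likelihood of the observed range given a particle's prediction."""
    return math.exp(-((predicted - observed) ** 2) / (2 * sigma ** 2))

def amcl_update(particles, predicted_ranges, observed_range):
    """One weight-and-resample step (multinomial resampling)."""
    weights = [measurement_likelihood(p, observed_range) for p in predicted_ranges]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

particles = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.2)]  # hypothetical (x, y) poses
predicted = [1.0, 0.9, 0.3]                        # range each pose would predict
print(amcl_update(particles, predicted, observed_range=0.95))
```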
    Technologies, Vol. 12, Pages 81: A Survey of Machine Learning in Edge Computing: Techniques, Frameworks, Applications, Issues, and Research Directions https://www.mdpi.com/2227-7080/12/6/81 2024-06-03

    Technologies doi: 10.3390/technologies12060081

    Authors: Oumayma Jouini Kaouthar Sethom Abdallah Namoun Nasser Aljohani Meshari Huwaytim Alanazi Mohammad N. Alanazi

    Internet of Things (IoT) devices often operate with limited resources while interacting with users and their environment, generating a wealth of data. Machine learning models interpret such sensor data, enabling accurate predictions and informed decisions. However, the sheer volume of data from billions of devices can overwhelm networks, making traditional cloud data processing inefficient for IoT applications. This paper presents a comprehensive survey of recent advances in models, architectures, hardware, and design requirements for deploying machine learning on low-resource devices at the edge and in cloud networks. Prominent IoT devices tailored to integrate edge intelligence include the Raspberry Pi, NVIDIA’s Jetson, Arduino Nano 33 BLE Sense, STM32 microcontrollers, SparkFun Edge, Google Coral Dev Board, and BeagleBone AI. These devices are supported by dedicated AI frameworks, such as TensorFlow Lite, OpenEI, Core ML, Caffe2, and MXNet, to run ML and DL tasks (e.g., object detection and gesture recognition). Both traditional machine learning methods (e.g., random forest, logistic regression) and deep learning methods (e.g., ResNet-50, YOLOv4, LSTM) are deployed on end devices, on the distributed edge, and in the distributed cloud. Moreover, we analyzed 1000 recent publications on “ML in IoT” from IEEE Xplore using support vector machine, random forest, and decision tree classifiers to identify emerging topics and application domains. Hot topics included big data, cloud, edge, multimedia, security, privacy, QoS, and activity recognition, while critical domains included industry, healthcare, agriculture, transportation, smart homes and cities, and assisted living. The major challenges hindering the implementation of edge machine learning include encrypting sensitive user data for security and privacy on edge devices, efficiently managing the resources of edge nodes through distributed learning architectures, and balancing the energy limitations of edge devices against the energy demands of machine learning.

    A Survey of Machine Learning in Edge Computing: Techniques, Frameworks, Applications, Issues, and Research Directions Oumayma Jouini Kaouthar Sethom Abdallah Namoun Nasser Aljohani Meshari Huwaytim Alanazi Mohammad N. Alanazi doi: 10.3390/technologies12060081 Technologies 2024-06-03 Technologies 2024-06-03 12 6
    Review
    81 10.3390/technologies12060081 https://www.mdpi.com/2227-7080/12/6/81
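    As a minimal illustration of the kind of on-device deployment the survey covers, the sketch below runs inference with TensorFlow Lite, one of the edge frameworks listed above. The model file name is a placeholder: any classifier exported with the TFLite converter would do, and the input here is random data shaped to whatever the model expects.

```python
import numpy as np
import tensorflow as tf  # tf.lite ships with the standard TensorFlow package

# Load a converted model; "model.tflite" is a placeholder path.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed dummy data shaped and typed to match the model's input tensor.
x = np.random.random_sample(inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # raw output tensor
```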
    Technologies, Vol. 12, Pages 80: Applications of Brain Wave Classification for Controlling an Intelligent Wheelchair https://www.mdpi.com/2227-7080/12/6/80 2024-06-03

    Technologies doi: 10.3390/technologies12060080

    Authors: Maria Carolina Avelar Patricia Almeida Brigida Monica Faria Luis Paulo Reis

    The independence and autonomy of both elderly and disabled people have been a growing concern in today’s society. Wheelchairs have therefore proven to be fundamental for the mobility of people with physical disabilities of the lower limbs, paralysis, or other restrictive conditions, and various adapted sensors can be employed to facilitate the wheelchair’s driving experience. This work develops a proof of concept of a brain–computer interface (BCI) whose ultimate goal is to control an intelligent wheelchair. An event-related (de)synchronization neuro-mechanism is used, since it corresponds to a synchronization, or desynchronization, of the mu and beta brain rhythms during the execution, preparation, or imagination of motor actions. Two datasets were used for algorithm development: one (A) from BCI Competition IV, acquired through twenty-two Ag/AgCl electrodes and encompassing motor imagery of the right and left hands and the feet; and another (B) obtained in the laboratory using an Emotiv EPOC headset, covering the same motor imagery tasks. Regarding feature extraction, several approaches were tested: two versions of the signal’s power spectral density, followed by a filter bank version; the use of the respective frequency coefficients; and, finally, two versions of the well-known filter bank common spatial pattern (FBCSP) method. With the second version of FBCSP, dataset A presented an F1-score of 0.797 and a rather low false positive rate of 0.150. Moreover, the corresponding average kappa score reached 0.693, of the same order of magnitude as the 0.57 obtained in the competition. Regarding dataset B, the average F1-score was 0.651, with a kappa score of 0.447 and a false positive rate of 0.471. It should be noted, however, that some subjects from this dataset presented F1-scores of 0.747 and 0.911, suggesting that the motor imagery (MI) aptitude of different users may influence their performance. In conclusion, promising results can be obtained using an architecture suitable for a real-time application.

    Applications of Brain Wave Classification for Controlling an Intelligent Wheelchair Maria Carolina Avelar Patricia Almeida Brigida Monica Faria Luis Paulo Reis doi: 10.3390/technologies12060080 Technologies 2024-06-03 Technologies 2024-06-03 12 6
    Article
    80 10.3390/technologies12060080 https://www.mdpi.com/2227-7080/12/6/80
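    The ERD/ERS mechanism described above is commonly quantified through band power in the mu and beta rhythms. The sketch below computes such band-power features per channel with Welch's method on synthetic data; the sampling rate, epoch length, and band edges are illustrative, and the paper's FBCSP pipeline additionally applies spatial filtering over a bank of sub-bands.

```python
import numpy as np
from scipy.signal import welch

fs = 250                              # hypothetical sampling rate (Hz)
eeg = np.random.randn(22, fs * 4)     # 22 channels x 4 s epoch (synthetic)

def band_power(signals: np.ndarray, fs: int, lo: float, hi: float) -> np.ndarray:
    """Mean PSD in the [lo, hi] Hz band, one value per channel."""
    freqs, psd = welch(signals, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=1)

mu = band_power(eeg, fs, 8, 12)       # mu rhythm
beta = band_power(eeg, fs, 13, 30)    # beta rhythm
features = np.concatenate([mu, beta])
print(features.shape)                 # (44,) = 2 bands x 22 channels
```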
    Technologies, Vol. 12, Pages 79: Comparison of a Custom-Made Inexpensive Air Permeability Tester with a Standardized Measurement Instrument https://www.mdpi.com/2227-7080/12/6/79 2024-06-02

    Technologies doi: 10.3390/technologies12060079

    Authors: Dietrich Spädt Niclas Richter Cornelia Golle Andrea Ehrmann Lilia Sabantina

    The air permeability of a textile fabric is one of the parameters that characterize its potential applications in garments, filters, airbags, etc. Calculating the air permeability is complicated by its dependence on many other fabric parameters, such as porosity, thickness, and weaving parameters, which is why the air permeability is usually measured instead. Standardized measurement instruments according to EN ISO 9237, however, are expensive and complex, putting them out of reach for small companies and many universities. For this reason, a simpler, inexpensive test instrument was suggested in a previous paper. Here, we show correlations between the results of the standardized and the custom-made instrument and verify this correlation using fluid dynamics calculations.

    Comparison of a Custom-Made Inexpensive Air Permeability Tester with a Standardized Measurement Instrument Dietrich Spädt Niclas Richter Cornelia Golle Andrea Ehrmann Lilia Sabantina doi: 10.3390/technologies12060079 Technologies 2024-06-02 Technologies 2024-06-02 12 6
    Article
    79 10.3390/technologies12060079 https://www.mdpi.com/2227-7080/12/6/79
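    The comparison reported above boils down to how strongly readings from the two instruments correlate. A minimal sketch of that analysis, with invented paired readings, is given below; the linear fit doubles as a calibration line for the custom tester.

```python
import numpy as np

# Invented paired air-permeability readings (e.g., in l/(m^2*s)).
standard = np.array([120.0, 245.0, 310.0, 480.0, 655.0])  # EN ISO 9237 device
custom = np.array([118.0, 250.0, 301.0, 495.0, 640.0])    # custom-made tester

r = np.corrcoef(standard, custom)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(standard, custom, 1)  # calibration line
print(f"r = {r:.4f}; custom ~ {slope:.3f} * standard + {intercept:.1f}")
```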
    Technologies, Vol. 12, Pages 78: Smart Energy Systems Based on Next-Generation Power Electronic Devices https://www.mdpi.com/2227-7080/12/6/78 2024-06-01

    Technologies doi: 10.3390/technologies12060078

    Authors: Nikolay Hinov

    Power electronics plays a key role in the management and conversion of electrical energy in a variety of applications, including the use of renewable energy sources such as solar, wind and hydrogen energy, as well as in electric vehicles, industrial technologies, homes and smart grids. These technologies are essential for the successful implementation of the green transition, as they help reduce carbon emissions and promote the production and consumption of cleaner and more sustainable energy. This work presents a new generation of power electronic devices and systems, covering the following main aspects: advances in semiconductor technologies, such as the use of silicon carbide (SiC) and gallium nitride (GaN); nanomaterials for the realization of magnetic components; the use of a modular principle to construct power electronic devices; the application of artificial intelligence techniques to device lifecycle design; and the environmental aspects of design. The new materials allow devices to operate at higher voltages, temperatures and frequencies, making them ideal for high-power applications and high-frequency operation. In addition, the development of integrated and modular power electronic systems that combine energy management, diagnostics and communication capabilities contributes to the more intelligent and efficient management of energy resources. This includes integration with the Internet of Things (IoT) and artificial intelligence (AI) for automated task solving and work optimization.

    Smart Energy Systems Based on Next-Generation Power Electronic Devices Nikolay Hinov doi: 10.3390/technologies12060078 Technologies 2024-06-01 Technologies 2024-06-01 12 6
    Article
    78 10.3390/technologies12060078 https://www.mdpi.com/2227-7080/12/6/78
    Technologies, Vol. 12, Pages 77: Deep Learning Approaches for Water Stress Forecasting in Arboriculture Using Time Series of Remote Sensing Images: Comparative Study between ConvLSTM and CNN-LSTM Models https://www.mdpi.com/2227-7080/12/6/77 2024-06-01

    Technologies doi: 10.3390/technologies12060077

    Authors: Ismail Bounoua Youssef Saidi Reda Yaagoubi Mourad Bouziani

    Irrigation is crucial for crop cultivation and productivity. However, traditional methods often waste water and energy because they neglect soil and crop variations, leading to inefficient water distribution and potential crop water stress. The crop water stress index (CWSI) has become a widely accepted index for assessing plant water status, but plant water stress must be forecast in order to estimate the quantity of irrigation water needed. Deep learning (DL) models for water stress forecasting have gained prominence in irrigation management to address these needs. In this paper, we present a comparative study between two deep learning models, ConvLSTM and CNN-LSTM, for water stress forecasting using remote sensing data. While these DL architectures have been proposed and studied in various applications, our novelty lies in studying their effectiveness for water stress forecasting using time series of remote sensing images. The proposed methodology involves meticulous preparation of the time series data, in which we calculate the crop water stress index (CWSI) from Landsat 8 satellite imagery through Google Earth Engine. Subsequently, we implemented the ConvLSTM and CNN-LSTM models and fine-tuned their hyperparameters; the same processes of model compilation, hyperparameter optimization, and model training were applied to both architectures. A citrus farm in Morocco was chosen as a case study. The analysis of the results reveals that the CNN-LSTM model outperforms the ConvLSTM model for long sequences (nine images), with RMSEs of 0.119 and 0.123, respectively, while ConvLSTM provides better results than CNN-LSTM for short sequences (three images), with RMSEs of 0.153 and 0.187, respectively.

    Deep Learning Approaches for Water Stress Forecasting in Arboriculture Using Time Series of Remote Sensing Images: Comparative Study between ConvLSTM and CNN-LSTM Models Ismail Bounoua Youssef Saidi Reda Yaagoubi Mourad Bouziani doi: 10.3390/technologies12060077 Technologies 2024-06-01 Technologies 2024-06-01 12 6
    Article
    77 10.3390/technologies12060077 https://www.mdpi.com/2227-7080/12/6/77
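    The pipeline above starts from per-pixel CWSI values derived from Landsat 8 land surface temperature. A common empirical form of the index, sketched below on synthetic temperatures, normalizes canopy temperature between well-watered (wet) and fully stressed (dry) baselines; the baseline values here are invented, and the paper derives its inputs in Google Earth Engine.

```python
import numpy as np

t_canopy = np.array([[301.2, 303.8],      # synthetic stand-in for a
                     [305.1, 307.9]])     # Landsat 8 LST tile (Kelvin)
t_wet, t_dry = 299.0, 310.0               # invented baseline temperatures

# CWSI = (Tc - Twet) / (Tdry - Twet), clipped to its nominal [0, 1] range.
cwsi = np.clip((t_canopy - t_wet) / (t_dry - t_wet), 0.0, 1.0)
print(cwsi)  # 0 = no stress, 1 = maximum stress
```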
    Technologies, Vol. 12, Pages 76: Vertical Balance of an Autonomous Two-Wheeled Single-Track Electric Vehicle https://www.mdpi.com/2227-7080/12/6/76 2024-05-28

    Technologies doi: 10.3390/technologies12060076

    Authors: David Rodríguez-Rosa Andrea Martín-Parra Andrés García-Vanegas Francisco Moya-Fernández Ismael Payo-Gutiérrez Fernando J. Castillo-García

    In the dynamic landscape of autonomous transport, the integration of intelligent transport systems and embedded control technology is pivotal. While strides have been made in the development of autonomous agents and multi-agent systems, the unique challenges posed by two-wheeled vehicles remain largely unaddressed, and dedicated control strategies for these vehicles have yet to be developed. The vertical balance of an autonomous two-wheeled single-track vehicle is an engineering challenge: this type of vehicle is unstable, and its dynamic behaviour changes with the forward velocity. We designed a gain-scheduled proportional–integral controller that adapts its gains to the forward velocity, maintaining the vertical balance of the vehicle by means of the front-wheel steering angle. The control law was tested with a prototype designed by the authors in different scenarios, on both smooth and uneven floors, and it maintained vertical balance in all cases.

    Vertical Balance of an Autonomous Two-Wheeled Single-Track Electric Vehicle David Rodríguez-Rosa Andrea Martín-Parra Andrés García-Vanegas Francisco Moya-Fernández Ismael Payo-Gutiérrez Fernando J. Castillo-García doi: 10.3390/technologies12060076 Technologies 2024-05-28 Technologies 2024-05-28 12 6
    Article
    76 10.3390/technologies12060076 https://www.mdpi.com/2227-7080/12/6/76
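    To make the control idea concrete, here is a minimal sketch of a gain-scheduled PI loop: the proportional and integral gains are interpolated over forward velocity and applied to the roll-angle error to produce a front-wheel steering command. The tuning points and signs are invented for illustration and do not come from the paper.

```python
import numpy as np

V_PTS = np.array([1.0, 3.0, 6.0])    # scheduling variable: velocity (m/s)
KP_PTS = np.array([8.0, 5.0, 3.0])   # invented proportional gains
KI_PTS = np.array([2.0, 1.2, 0.6])   # invented integral gains

class ScheduledPI:
    def __init__(self, dt: float):
        self.dt = dt
        self.integral = 0.0

    def step(self, roll_error: float, velocity: float) -> float:
        """Return a steering-angle command (rad) for the current roll error."""
        kp = np.interp(velocity, V_PTS, KP_PTS)   # gains blend with velocity
        ki = np.interp(velocity, V_PTS, KI_PTS)
        self.integral += roll_error * self.dt
        return kp * roll_error + ki * self.integral

controller = ScheduledPI(dt=0.01)
print(controller.step(roll_error=0.05, velocity=2.0))
```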
    Technologies, Vol. 12, Pages 75: Intelligent Cane for Assisting the Visually Impaired https://www.mdpi.com/2227-7080/12/6/75 2024-05-27

    Technologies doi: 10.3390/technologies12060075

    Authors: Claudiu-Eugen Panazan Eva-Henrietta Dulf

    Those with visual impairments, including complete blindness or partial sight loss, constitute a significant global population: according to World Health Organization (WHO) estimates, at least 2.2 billion people worldwide have near or distance vision disorders. Addressing their needs is crucial, and a smart cane tailored for the blind can greatly improve their daily lives. This paper presents a smart cane equipped with dual ultrasonic sensors for obstacle detection, catering to the visually impaired. The primary focus is on developing a versatile device capable of operating in diverse conditions and delivering efficient obstacle alerts. The strategically placed ultrasonic sensors emit high-frequency sound waves and measure their echoes, allowing the device to calculate obstacle distances and assess potential threats to the user. To address various obstacle types, the two ultrasonic sensors handle overhead and ground-level barriers separately, ensuring precise warnings. With a detection range spanning 2 to 400 cm, the device provides timely information for user reaction. Dual alert methods, including vibrations and audio signals, offer flexibility to users and are controlled through intuitive switches. Additionally, a Bluetooth-connected mobile app enhances functionality, activating audio alerts if the cane is misplaced or too distant. Cost-effective implementation enhances accessibility, supporting a broader user base. This smart cane not only represents a technical achievement but also significantly improves the quality of life of visually impaired individuals, emphasizing the social impact of technology. The research underscores the importance of technological research in addressing societal challenges and highlights the need for solutions that positively impact vulnerable communities, shaping future directions in research and technological development.

    Intelligent Cane for Assisting the Visually Impaired Claudiu-Eugen Panazan Eva-Henrietta Dulf doi: 10.3390/technologies12060075 Technologies 2024-05-27 Technologies 2024-05-27 12 6
    Article
    75 10.3390/technologies12060075 https://www.mdpi.com/2227-7080/12/6/75
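    The core of the ranging logic above is the standard time-of-flight rule: distance is half the echo round-trip time multiplied by the speed of sound. The sketch below applies it and maps distances to alert modes; the alert thresholds are invented, and only the 2 to 400 cm range comes from the entry.

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343   # at roughly 20 degrees C

def echo_to_distance_cm(round_trip_us: float) -> float:
    """Half the round trip, since the pulse travels out and back."""
    return round_trip_us * SPEED_OF_SOUND_CM_PER_US / 2

def alert_mode(distance_cm: float) -> str:
    if distance_cm < 50:
        return "vibrate + audio"    # imminent obstacle (threshold invented)
    if distance_cm < 150:
        return "vibrate"            # approaching obstacle (threshold invented)
    return "silent"

for t_us in (1000, 5000, 20000):    # example echo round-trip times
    d = echo_to_distance_cm(t_us)
    print(f"{d:7.1f} cm -> {alert_mode(d)}")
```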
    Technologies, Vol. 12, Pages 74: Effect of Oscillating Area on Generating Microbubbles from Hollow Ultrasonic Horn https://www.mdpi.com/2227-7080/12/6/74 2024-05-25

    Technologies doi: 10.3390/technologies12060074

    Authors: Kodai Hasegawa Nobuhiro Yabuki Toshinori Makuta

    Microbubbles, which are tiny bubbles with a diameter of less than 100 µm, have been attracting attention in recent years. Conventional methods of microbubble generation using porous materials and swirling flows suffer from problems such as large equipment size and non-uniform bubble generation. We have therefore been developing a hollow ultrasonic horn with an internal flow path as a microbubble-generating device: by supplying gas and ultrasonic waves simultaneously, the gas–liquid interface is violently disturbed, generating microbubbles. Although this device can generate microbubbles even in highly viscous fluids and in high-temperature fluids such as molten metals, it also produces many relatively large bubbles of 1 mm or more. Since practical applications in agriculture, aquaculture, and medicine require generating a large amount of microbubbles in a short period of time, previous research has tried to solve this problem by increasing the amplitude of the ultrasonic oscillation. However, further increasing the amplitude is difficult for structural reasons of the horn and because of the behavior of bubbles at the horn tip. Therefore, the oscillating area of the horn tip, which had not received attention before, was enlarged by a factor of 2.94 to facilitate the transmission of ultrasonic waves to the bubbles, and the effect of this change was investigated. As a result, a much larger portion of the supplied gas was broken up into fine bubbles, especially at high gas flow rates, increasing the amount of microbubbles generated.

    Effect of Oscillating Area on Generating Microbubbles from Hollow Ultrasonic Horn Kodai Hasegawa Nobuhiro Yabuki Toshinori Makuta doi: 10.3390/technologies12060074 Technologies 2024-05-25 Technologies 2024-05-25 12 6
    Article
    74 10.3390/technologies12060074 https://www.mdpi.com/2227-7080/12/6/74
    Technologies, Vol. 12, Pages 73: Gamified VR Storytelling for Cultural Tourism Using 3D Reconstructions, Virtual Humans, and 360° Videos https://www.mdpi.com/2227-7080/12/6/73 2024-05-22

    Technologies doi: 10.3390/technologies12060073

    Authors: Emmanouil Kontogiorgakis Emmanouil Zidianakis Eirini Kontaki Nikolaos Partarakis Constantina Manoli Stavroula Ntoa Constantine Stephanidis

    This work addresses the lack of methodologies for the seamless integration of 360° videos, 3D digitized artifacts, and virtual human agents within a virtual reality environment. The proposed methodology is showcased in the context of a tour guide application and centers around the innovative use of a central hub, metaphorically linking users to various historical locations. Leveraging a treasure hunt metaphor and a storytelling approach, this combination of digital structures builds an exploratory learning experience. Virtual human agents contribute to the scenario by offering personalized narratives and educational content, contributing to an enriched cultural heritage journey. Key contributions of this research include the exploration of the symbolic use of the central hub, the application of a gamified approach through the treasure hunt metaphor, and the seamless integration of various technologies to enhance user engagement. This work contributes to the understanding of context-specific cultural heritage applications and their potential impact on cultural tourism. The output of this research is the reusable methodology and its demonstration in the implemented showcase application, which was assessed through a heuristic evaluation.

    Gamified VR Storytelling for Cultural Tourism Using 3D Reconstructions, Virtual Humans, and 360° Videos Emmanouil Kontogiorgakis Emmanouil Zidianakis Eirini Kontaki Nikolaos Partarakis Constantina Manoli Stavroula Ntoa Constantine Stephanidis doi: 10.3390/technologies12060073 Technologies 2024-05-22 Technologies 2024-05-22 12 6
    Article
    73 10.3390/technologies12060073 https://www.mdpi.com/2227-7080/12/6/73
    Technologies, Vol. 12, Pages 72: A Comprehensive Survey on the Investigation of Machine-Learning-Powered Augmented Reality Applications in Education https://www.mdpi.com/2227-7080/12/5/72 2024-05-19

    Technologies doi: 10.3390/technologies12050072

    Authors: Haseeb Ali Khan Sonain Jamil Md. Jalil Piran Oh-Jin Kwon Jong-Weon Lee

    Machine learning (ML) is enabling augmented reality (AR) to gain popularity in various fields, including gaming, entertainment, healthcare, and education. ML enhances AR applications in education by providing accurate visualizations of objects, and ML algorithms facilitate the recognition of objects and gestures in AR systems used from kindergarten through university. The purpose of this survey is to provide an overview of the various ways in which ML techniques can be applied within the field of AR in education. We first describe the background of AR, then discuss the ML models used in AR education applications and how ML is used in AR. Analyzing these frameworks allows the challenges and solutions of each subgroup to be identified. In addition, we outline several research gaps and future research directions in ML-based AR frameworks for education.

    A Comprehensive Survey on the Investigation of Machine-Learning-Powered Augmented Reality Applications in Education Haseeb Ali Khan Sonain Jamil Md. Jalil Piran Oh-Jin Kwon Jong-Weon Lee doi: 10.3390/technologies12050072 Technologies 2024-05-19 Technologies 2024-05-19 12 5
    Review
    72 10.3390/technologies12050072 https://www.mdpi.com/2227-7080/12/5/72
    Technologies, Vol. 12, Pages 71: Analysis, Evaluation, and Future Directions on Multimodal Deception Detection https://www.mdpi.com/2227-7080/12/5/71 2024-05-18

    Technologies doi: 10.3390/technologies12050071

    Authors: Arianna D’Ulizia Alessia D’Andrea Patrizia Grifoni Fernando Ferri

    Multimodal deception detection has received increasing attention from the scientific community in recent years, mainly due to growing ethical and security issues, as well as the growing use of digital media. A great number of deception detection methods have been proposed in several domains, such as political elections, security contexts, and job interviews. However, a systematic analysis of the current situation, evaluation practices, and future directions of deception detection based on cues coming from multiple modalities has been lacking. This paper, starting from a description of the methods and metrics used for the analysis and evaluation of multimodal deception detection on video, provides a vision of future directions in this field. For the analysis, the PRISMA recommendations are followed, which allow the collection and synthesis of all the available research on the topic and the extraction of information on the multimodal features, fusion methods, classification approaches, evaluation datasets, and metrics. The results of this analysis contribute to the assessment of the state of the art and the evaluation of evidence on important research questions in multimodal deception detection. Moreover, they provide guidance for future research in the field.

    Analysis, Evaluation, and Future Directions on Multimodal Deception Detection Arianna D’Ulizia Alessia D’Andrea Patrizia Grifoni Fernando Ferri doi: 10.3390/technologies12050071 Technologies 2024-05-18 Technologies 2024-05-18 12 5
    Perspective
    71 10.3390/technologies12050071 https://www.mdpi.com/2227-7080/12/5/71
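    One of the fusion families such surveys compare is decision-level (late) fusion, where each modality produces its own deception score and the scores are combined. The toy sketch below uses a weighted average with invented scores and weights; the papers analyzed also cover feature-level and hybrid fusion.

```python
# Invented per-modality deception probabilities and fusion weights.
scores = {"facial": 0.72, "vocal": 0.55, "verbal": 0.40}
weights = {"facial": 0.5, "vocal": 0.3, "verbal": 0.2}

fused = sum(scores[m] * weights[m] for m in scores)   # weighted late fusion
label = "deceptive" if fused >= 0.5 else "truthful"   # threshold invented
print(f"fused score = {fused:.3f} -> {label}")
```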
    Technologies, Vol. 12, Pages 70: Speckle Plethysmograph-Based Blood Pressure Assessment https://www.mdpi.com/2227-7080/12/5/70 2024-05-18

    Technologies doi: 10.3390/technologies12050070

    Authors: Floranne T. Ellington Anh Nguyen Mao-Hsiang Huang Tai Le Bernard Choi Hung Cao

    Continuous non-invasive blood pressure (CNBP) monitoring is of the utmost importance in detecting and managing hypertension, a leading cause of death in the United States. Extensive research has delved into methods for predicting systolic and diastolic blood pressure values by leveraging pulse arrival time (PAT), the time difference between the proximal and distal signal peaks. The most widely employed pairing involves electrocardiography (ECG) and photoplethysmography (PPG). Possessing similar characteristics in terms of measuring blood flow changes, a recently investigated optical signal known as speckleplethysmography (SPG) has shown greater stability and a higher signal-to-noise ratio than PPG. Thus, SPG is a potential surrogate to pair with ECG for CNBP estimation. The present study aims to unlock the untapped potential of SPG as a signal for non-invasive blood pressure monitoring based on PAT. To ascertain SPG’s capabilities, eight subjects were enrolled in multiple recording sessions. A third-party device was employed for ECG and PPG measurements, while a commercial device served as the reference for arterial blood pressure (ABP). SPG measurements were obtained using a prototype smartphone-based system. Across three scenarios—sitting, walking, and running—the subjects’ signals and ABP were recorded to investigate the predictive capacity for systolic blood pressure (SBP). The collected data were processed and prepared for machine learning models, including support vector regression and decision tree regression. The models’ effectiveness was evaluated using the root-mean-square error (RMSE) and the mean absolute percentage error (MAPE). In most instances, predictions utilizing PAT derived from SPG (PAT-SPG) exhibited comparable or superior performance to PAT derived from PPG (PAT-PPG) (e.g., SPG ±12.4 mmHg vs. PPG ±13.7 mmHg for RMSE at rest, and SPG 8% vs. PPG 9% for MAPE). Furthermore, incorporating an additional feature, namely the previous SBP value, reduced prediction errors for both signals in multiple model configurations (e.g., RMSE at rest improving from ±12.4 mmHg to ±3.7 mmHg for SPG, and MAPE from 8% to 3%). These preliminary tests underscore the remarkable potential of this novel signal for PAT-based blood pressure prediction. Subsequent studies involving a larger cohort of test subjects and advancements in the SPG acquisition system hold promise for further improving the effectiveness of this newly explored signal in blood pressure monitoring.

    Speckle Plethysmograph-Based Blood Pressure Assessment Floranne T. Ellington Anh Nguyen Mao-Hsiang Huang Tai Le Bernard Choi Hung Cao doi: 10.3390/technologies12050070 Technologies 2024-05-18 Technologies 2024-05-18 12 5
    Article
    70 10.3390/technologies12050070 https://www.mdpi.com/2227-7080/12/5/70
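    As a rough sketch of the regression setup described above, the code below trains support vector regression to map PAT (standing in for PAT-SPG) plus the previous SBP value, the extra feature the study reports reduces error, to systolic blood pressure. All data are synthetic, generated from an invented linear relationship.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
pat = rng.uniform(0.15, 0.35, 50)       # pulse arrival times (s), synthetic
prev_sbp = rng.uniform(100, 140, 50)    # previous SBP values (mmHg), synthetic
# Invented ground truth: SBP falls as PAT rises, nudged by the previous SBP.
sbp = 160 - 150 * pat + 0.2 * (prev_sbp - 120) + rng.normal(0, 2, 50)

X = np.column_stack([pat, prev_sbp])
model = SVR(kernel="rbf", C=10.0).fit(X, sbp)
print(model.predict([[0.25, 125.0]]))   # predicted SBP (mmHg)
```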
    Technologies, Vol. 12, Pages 69: Evaluating a Controlled Electromagnetic Launcher for Safe Remote Drug Delivery https://www.mdpi.com/2227-7080/12/5/69 2024-05-17

    Technologies doi: 10.3390/technologies12050069

    Authors: John LaRocco Qudsia Tahmina John Simonis

    Biologists and veterinarians rely on dart projectors to inject animals with drugs, take biopsies from specimens, or implant tracking chips. Firearms, air guns, and other launchers are limited in their ability to precisely control the kinetic energy of a projectile, and excessive kinetic energy can injure the animal. To improve the safety of remote drug delivery, a lidar-modulated electromagnetic launcher and a soft drug delivery dart were prototyped. A single-stage revolver coilgun and soft dart were designed and tested at distances up to 8 m. With a coil efficiency of 2.25%, the launcher could consistently deliver a projectile at a controlled kinetic energy of 1.00 ± 0.006 J and an uncontrolled kinetic energy of 2.66 ± 0.076 J. Although modifications to the charging time, sensors, and electronics could improve performance, our launcher performed at the required level at the necessary distances. The precision achieved with commercial components enables many other applications, from law enforcement to manufacturing.

    Evaluating a Controlled Electromagnetic Launcher for Safe Remote Drug Delivery John LaRocco Qudsia Tahmina John Simonis doi: 10.3390/technologies12050069 Technologies 2024-05-17 Technologies 2024-05-17 12 5
    Article
    69 10.3390/technologies12050069 https://www.mdpi.com/2227-7080/12/5/69
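    The reported efficiency can be sanity-checked with two textbook formulas: the dart's kinetic energy (mv²/2) against the electrical energy stored in the capacitor bank (CV²/2). The mass, velocity, capacitance, and voltage below are invented, chosen only so the ratio lands near the paper's 2.25% figure.

```python
def kinetic_energy_j(mass_kg: float, velocity_ms: float) -> float:
    return 0.5 * mass_kg * velocity_ms ** 2

def capacitor_energy_j(capacitance_f: float, voltage_v: float) -> float:
    return 0.5 * capacitance_f * voltage_v ** 2

ke = kinetic_energy_j(mass_kg=0.010, velocity_ms=14.1)            # ~1.0 J dart
stored = capacitor_energy_j(capacitance_f=0.002, voltage_v=210)   # ~44 J bank
print(f"KE = {ke:.2f} J, coil efficiency = {ke / stored:.2%}")
```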
    Technologies, Vol. 12, Pages 68: Application and Challenges of the Technology Acceptance Model in Elderly Healthcare: Insights from ChatGPT https://www.mdpi.com/2227-7080/12/5/68 2024-05-13

    Technologies doi: 10.3390/technologies12050068

    Authors: Sang Dol Kim

    The Technology Acceptance Model (TAM) plays a pivotal role in elderly healthcare, serving as a theoretical framework. This study aimed to identify TAM’s core components, practical applications, and the challenges arising from its application, and to propose countermeasures in elderly healthcare. This descriptive study was conducted utilizing OpenAI’s ChatGPT, accessed on 10 January 2024. ChatGPT’s responses to three open-ended questions were collected and qualitatively evaluated for reliability against previous studies. The core components of TAM were identified as perceived usefulness, perceived ease of use, attitude toward use, behavioral intention to use, subjective norms, image, and facilitating conditions. TAM’s application areas span various technologies in elderly healthcare, such as telehealth, wearable devices, mobile health apps, and more. Challenges arising from TAM applications include technological literacy barriers, digital divide concerns, privacy and security apprehensions, resistance to change, limited awareness and information, health conditions and cognitive impairment, trust and reliability concerns, a lack of tailored interventions, age stereotypes, and integration with traditional healthcare. In conclusion, customized interventions are crucial for successful technology acceptance among the elderly population. The findings of this study are expected to enhance understanding of elderly healthcare and technology adoption, and the insights gained through natural language processing models like ChatGPT are anticipated to provide a fresh perspective.

    Application and Challenges of the Technology Acceptance Model in Elderly Healthcare: Insights from ChatGPT Sang Dol Kim doi: 10.3390/technologies12050068 Technologies 2024-05-13 Technologies 2024-05-13 12 5
    Article
    68 10.3390/technologies12050068 https://www.mdpi.com/2227-7080/12/5/68
    Technologies, Vol. 12, Pages 67: Study of an LLC Converter for Thermoelectric Waste Heat Recovery Integration in Shipboard Microgrids https://www.mdpi.com/2227-7080/12/5/67 2024-05-11

    Technologies doi: 10.3390/technologies12050067

    Authors: Nick Rigogiannis Ioannis Roussos Christos Pechlivanis Ioannis Bogatsis Anastasios Kyritsis Nick Papanikolaou Michael Loupis

    Static waste heat recovery, by means of thermoelectric generator (TEG) modules, constitutes a fast-growing energy harvesting technology on the way towards greener transportation. Many commercial solutions are already available for small internal combustion engine (ICE) vehicles, while further development and cost reductions of TEG devices are expanding their applicability to higher-power transportation means (i.e., ships and aircraft). In this light, the integration of TEG-based waste heat recovery in a shipboard distribution network is studied in this work. Several voltage step-up techniques are considered, and the most suitable ones are assessed via the LTspice simulation platform. The design procedure of the selected LLC resonant converter is presented and analyzed in detail. Furthermore, a flexible control strategy is proposed, capable of either output voltage regulation (constant voltage) or maximum power point tracking (MPPT), according to the application demands. Finally, both simulations and experiments (on a suitable laboratory test bench) are performed. The obtained measurements indicate the high efficiency that can be achieved with the LLC converter over a wide operating area, as well as the functionality and adequate performance of the control scheme in both operating modes.
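
    The abstract names a dual-mode controller (constant voltage or MPPT) but does not disclose the tracking algorithm. As a hedged illustration only, the following Python sketch shows perturb-and-observe, one common MPPT technique such a converter controller could use; the function names, duty bounds, and step size are assumptions, not the authors' implementation.

```python
# Hypothetical perturb-and-observe MPPT sketch for a TEG-fed DC/DC converter.
# read_voltage/read_current/set_duty are assumed hardware-access callables.
def perturb_and_observe(read_voltage, read_current, set_duty,
                        duty=0.5, step=0.01, iterations=1000):
    """Climb the TEG power curve by perturbing the converter duty cycle."""
    prev_power = read_voltage() * read_current()
    direction = 1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.05), 0.95)  # stay in safe duty bounds
        set_duty(duty)
        power = read_voltage() * read_current()
        if power < prev_power:        # stepped past the maximum power point
            direction = -direction    # reverse the perturbation
        prev_power = power
    return duty
```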

    Study of an LLC Converter for Thermoelectric Waste Heat Recovery Integration in Shipboard Microgrids Nick Rigogiannis Ioannis Roussos Christos Pechlivanis Ioannis Bogatsis Anastasios Kyritsis Nick Papanikolaou Michael Loupis doi: 10.3390/technologies12050067 Technologies 2024-05-11 12 5
    Article
    67 10.3390/technologies12050067 https://www.mdpi.com/2227-7080/12/5/67
    Technologies, Vol. 12, Pages 66: Converging Artificial Intelligence and Quantum Technologies: Accelerated Growth Effects in Technological Evolution https://www.mdpi.com/2227-7080/12/5/66

    Technologies doi: 10.3390/technologies12050066

    Authors: Mario Coccia

    One of the fundamental problems in the field of technological studies is to clarify the drivers and dynamics of technological evolution for sustaining industrial and economic change. This study confronts the problem by analyzing converging technologies to explain their effects on evolutionary dynamics over time. The paper focuses on the technological interaction between artificial intelligence and quantum technologies using a technometric model of technological evolution based on scientific and technological information (publications and patents). Findings show that quantum technology has a growth rate of 1.07 and artificial intelligence technology a growth rate of 1.37, whereas the technological interaction of converging quantum and artificial intelligence technologies has an accelerated growth rate of 1.58, higher than the trends of these technologies taken individually. These findings suggest that technological interaction is one of the fundamental determinants in the rapid evolution of path-breaking technologies and disruptive innovations. The deductive implications of these results about the effects of converging technologies are: (a) accelerated evolutionary growth; (b) a disproportionate (allometric) growth of patents driven by publications, supporting a fast technological evolution. Our results support policy and managerial implications for the decision making of policymakers, technology analysts, and R&D managers, who can direct R&D investments towards fruitful inter-relationships between radical technologies to foster scientific and technological change with positive societal and economic impacts.
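
    The growth rates quoted above can be read against the standard technometric (allometric) model of technological evolution; the specification below is a hedged reconstruction for orientation, not quoted from the paper.

```latex
% Hedged reconstruction of a standard technometric (allometric) growth model,
% consistent with the exponents reported in the abstract.
\log T_i(t) = \log A + B \, \log S(t)
% T_i(t): measure of the evolving technology i (e.g., cumulative patents/publications)
% S(t):   measure of the reference technology system
% B:      relative rate of growth; here B = 1.07 (quantum), B = 1.37 (AI),
%         B = 1.58 (converging AI + quantum), so B > 1 indicates accelerated,
%         disproportionate growth of the converging pair.
```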

    Converging Artificial Intelligence and Quantum Technologies: Accelerated Growth Effects in Technological Evolution Mario Coccia doi: 10.3390/technologies12050066 Technologies 2024-05-10 12 5
    Article
    66 10.3390/technologies12050066 https://www.mdpi.com/2227-7080/12/5/66
    Technologies, Vol. 12, Pages 65: Fluorine-Free Single-Component Polyelectrolyte of Poly(ethylene glycol) Bearing Lithium Methanesulfonylsulfonimide Terminal Groups: Effect of Structural Variance on Ionic Conductivity https://www.mdpi.com/2227-7080/12/5/65

    Technologies doi: 10.3390/technologies12050065

    Authors: Bungo Ochiai Koki Hirabayashi Yudai Fujii Yoshimasa Matsumura

    Fluorine-free single-component polyelectrolytes were developed via the hybridization of lithium methanesulfonylsulfonimide (LiMSSI) moieties onto poly(ethylene glycol) (PEG) derivatives with different morphologies, and the relationship between the structure and the ionic conductivity was investigated. The PEG-LiMSSI derivatives with one, two, and three LiMSSI end groups were prepared via the concomitant Michael-type addition and lithiation of PEGs and N-methanesulfonylvinylsulfonimide. The ionic conductivity at 60 °C ranged from 1.8 × 10−7 to 2.0 × 10−4 S/cm. The PEG-LiMSSI derivatives with one LiMSSI terminus, or with two LiMSSI termini located at opposite chain ends, show higher ionic conductivity, on par with the best fluorine-free single-component polyelectrolytes, than the derivatives with two LiMSSI termini at one end or with three LiMSSI termini.
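
    For context, ionic conductivity figures like those above are typically obtained from a bulk resistance measured by impedance spectroscopy; the relation below is the standard definition, and the measurement route itself is an assumption, since the abstract does not state it.

```latex
% Standard relation between bulk resistance and ionic conductivity
% (the measurement route is assumed, not stated in the abstract).
\sigma = \frac{L}{R_b \, A}
% L: sample thickness, A: electrode area, R_b: bulk resistance.
% The reported span of 1.8 x 10^{-7} to 2.0 x 10^{-4} S/cm at 60 °C thus
% corresponds to roughly three orders of magnitude of variation in R_b.
```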

    Fluorine-Free Single-Component Polyelectrolyte of Poly(ethylene glycol) Bearing Lithium Methanesulfonylsulfonimide Terminal Groups: Effect of Structural Variance on Ionic Conductivity Bungo Ochiai Koki Hirabayashi Yudai Fujii Yoshimasa Matsumura doi: 10.3390/technologies12050065 Technologies 2024-05-09 12 5
    Article
    65 10.3390/technologies12050065 https://www.mdpi.com/2227-7080/12/5/65
    Technologies, Vol. 12, Pages 64: Atomic Quantum Technologies for Quantum Matter and Fundamental Physics Applications https://www.mdpi.com/2227-7080/12/5/64

    Technologies doi: 10.3390/technologies12050064

    Authors: Jorge Yago Malo Luca Lepori Laura Gentini Maria Luisa (Marilù) Chiofalo

    Physics is living through an era of unprecedented cross-fertilization among the different areas of science. In this perspective review, we discuss the manifold impact that state-of-the-art cold and ultracold-atomic platforms can have on fundamental and applied science through the development of platforms for quantum simulation, computation, metrology, and sensing. We illustrate how the engineering of table-top experiments with atom technologies is engendering applications to understand problems in condensed matter and fundamental physics, cosmology, and astrophysics, unveil foundational aspects of quantum mechanics, and advance quantum chemistry and the emerging field of quantum biology. In this journey, we take the perspective of two main approaches, i.e., creating quantum analogues and building quantum simulators, highlighting that, independently of whether the ultimate goal of a universal quantum computer is met, the remarkable transformative effects of these achievements remain unchanged. We wish to convey three main messages. First, this atom-based quantum technology enterprise is ushering in a new era in the way quantum technologies are used for fundamental science, even beyond the advancement of knowledge, characterised by truly cross-disciplinary research, an extended interplay between theoretical and experimental thinking, and an intersectoral approach. Second, quantum many-body physics is unavoidably taking center stage in frontier science. Third, progress in quantum science and technology will have a capillary impact on society: this effect is not confined to isolated or highly specialized areas of knowledge, but is expected to reach and have a pervasive influence on a broad range of aspects of society. While this happens, the adoption of a responsible research and innovation approach to quantum technologies is mandatory, to accompany citizens in building awareness and future scaffolding. Following all of the above reflections, this perspective review is thus aimed at scientists active or interested in interdisciplinary research, providing the reader with an overview of the current status of these wide fields of research in which cold and ultracold-atomic platforms play a vital role in description and simulation.

    Atomic Quantum Technologies for Quantum Matter and Fundamental Physics Applications Jorge Yago Malo Luca Lepori Laura Gentini Maria Luisa (Marilù) Chiofalo doi: 10.3390/technologies12050064 Technologies 2024-05-07 12 5
    Review
    64 10.3390/technologies12050064 https://www.mdpi.com/2227-7080/12/5/64
    Technologies, Vol. 12, Pages 63: Hunting Search Algorithm-Based Adaptive Fuzzy Tracking Controller for an Aero-Pendulum https://www.mdpi.com/2227-7080/12/5/63

    Technologies doi: 10.3390/technologies12050063

    Authors: Ricardo Rojas-Galván José R. García-Martínez Edson E. Cruz-Miguel Omar A. Barra-Vázquez Luis F. Olmedo-García Juvenal Rodríguez-Reséndiz

    The aero-pendulum is a non-linear system widely used to develop and test new control strategies. This paper presents a new methodology for an adaptive PID fuzzy-based tracking controller using a Hunting Search (HuS) algorithm. The HuS algorithm computes the parameters of the membership functions of the fuzzification stage. As a novelty, the algorithm guarantees the overlap of the membership functions to ensure that all the functions are interconnected, generating new hunters that search for better solutions in the overlapping area. For the defuzzification stage, the HuS algorithm sets the singletons in optimal positions so that the controller response can be evaluated using the centroid method. To test the robustness of the methodology, the PID fuzzy controller algorithm is implemented in an embedded system to track the angular position of an aero-pendulum test bench. The results show that the proposed adaptive PID fuzzy controller achieves root mean square error values of 0.42, 0.40, and 0.49 for set points of 80, 90, and 100 degrees, respectively.
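
    The centroid method over singleton outputs named in the abstract reduces to a firing-strength-weighted average of the singleton positions. A minimal Python sketch follows; the singleton positions and rule strengths are placeholders, since the HuS-tuned values are not given in the abstract.

```python
import numpy as np

def centroid_defuzzify(singletons, firing_strengths):
    """Centroid (weighted-average) defuzzification for singleton outputs."""
    s = np.asarray(singletons, dtype=float)
    w = np.asarray(firing_strengths, dtype=float)
    return float(np.sum(w * s) / np.sum(w))

# Placeholder example: three rules firing with different strengths.
u = centroid_defuzzify(singletons=[-1.0, 0.0, 1.0],
                       firing_strengths=[0.2, 0.7, 0.1])
print(u)  # crisp control action fed to the adaptive PID stage
```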

    Hunting Search Algorithm-Based Adaptive Fuzzy Tracking Controller for an Aero-Pendulum Ricardo Rojas-Galván José R. García-Martínez Edson E. Cruz-Miguel Omar A. Barra-Vázquez Luis F. Olmedo-García Juvenal Rodríguez-Reséndiz doi: 10.3390/technologies12050063 Technologies 2024-05-04 12 5
    Article
    63 10.3390/technologies12050063 https://www.mdpi.com/2227-7080/12/5/63
    Technologies, Vol. 12, Pages 62: Inference Analysis of Video Quality of Experience in Relation with Face Emotion, Video Advertisement, and ITU-T P.1203 https://www.mdpi.com/2227-7080/12/5/62

    Technologies doi: 10.3390/technologies12050062

    Authors: Tisa Selma Mohammad Mehedy Masud Abdelhak Bentaleb Saad Harous

    This study introduces a facial emotion recognition (FER)-based machine learning framework for real-time QoE assessment in video streaming. The study aims to address the challenges posed by end-to-end encryption and video advertisement while enhancing user QoE. Our proposed framework significantly outperforms the base reference, ITU-T P.1203, by up to 37.1% in terms of accuracy, and by 21.74% after attribute selection. Our study contributes to the field in two ways. First, we offer a promising solution to enhance user satisfaction in video streaming services via the real-time integration of user emotion and user feedback, providing a more holistic understanding of user experience. Second, we offer high-quality data collection and insights by collecting real data from diverse regions to minimize potential biases and to provide advertisement placement suggestions.
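
    To make the evaluation idea concrete, here is a hedged sketch of training a classifier on FER-derived features to predict a QoE class and scoring its accuracy; the feature set, model family, and data are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic placeholders: FER emotion probabilities plus streaming statistics
# as features, and a five-level QoE label as the target.
rng = np.random.default_rng(0)
X = rng.random((500, 8))
y = rng.integers(0, 5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))  # compared against a P.1203 baseline
```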

    Inference Analysis of Video Quality of Experience in Relation with Face Emotion, Video Advertisement, and ITU-T P.1203 Tisa Selma Mohammad Mehedy Masud Abdelhak Bentaleb Saad Harous doi: 10.3390/technologies12050062 Technologies 2024-05-03 12 5
    Article
    62 10.3390/technologies12050062 https://www.mdpi.com/2227-7080/12/5/62
    Technologies, Vol. 12, Pages 61: New Upgrade to Improve Operation of Conventional Grid-Connected Photovoltaic Systems https://www.mdpi.com/2227-7080/12/5/61

    Technologies doi: 10.3390/technologies12050061

    Authors: Manuel Cáceres Alexis Raúl González Mayans Andrés Firman Luis Vera Juan de la Casa Higueras

    The incorporation of distributed generation with photovoltaic systems entails a drawback associated with intermittency in the generation capacity due to variations in the solar resource. In general, this aspect limits the level of penetration that this resource can reach without producing an appreciable impact on the quality of the electrical supply. With the intention of reducing this intermittency, this paper presents the characterization of a methodology for maximizing grid-connected PV system operation under low-solar-radiation conditions. A new concept of a hybrid system, based on a constant current source and capable of integrating different sources into a conventional grid-connected PV system, is presented. Results of an experimental characterization of a low-voltage grid–PV system connection, with a DC/DC converter acting as the constant-current source, are shown under zero and non-zero radiation conditions. The results obtained demonstrate that the proposed integration method works efficiently without causing appreciable effects on the parameters that define the quality of the electrical supply. In this way, it is possible to efficiently incorporate another source of energy, taking advantage of the characteristics of the GCPVS without further interventions in the system. It is expected that this topology could help to integrate other generation and/or storage technologies into already existing PV systems, opening a wide field of research in the PV systems area.

    New Upgrade to Improve Operation of Conventional Grid-Connected Photovoltaic Systems Manuel Cáceres Alexis Raúl González Mayans Andrés Firman Luis Vera Juan de la Casa Higueras doi: 10.3390/technologies12050061 Technologies 2024-05-02 12 5
    Article
    61 10.3390/technologies12050061 https://www.mdpi.com/2227-7080/12/5/61
    Technologies, Vol. 12, Pages 60: A Cyber–Physical System Based on Digital Twin and 3D SCADA for Real-Time Monitoring of Olive Oil Mills https://www.mdpi.com/2227-7080/12/5/60

    Technologies doi: 10.3390/technologies12050060

    Authors: Cristina Martinez-Ruedas Jose-Maria Flores-Arias Isabel M. Moreno-Garcia Matias Linan-Reyes Francisco Jose Bellido-Outeiriño

    Cyber–physical systems involve the creation, continuous updating, and monitoring of virtual replicas that closely mirror their physical counterparts. These virtual representations are fed by real-time data from sensors, Internet of Things (IoT) devices, and other sources, enabling a dynamic and accurate reflection of the state of the physical system. This emphasizes the importance of data synchronization, visualization, and interaction within virtual environments as a means to improve decision-making, training, maintenance, and overall operational efficiency. This paper presents a novel approach to a cyber–physical system that integrates virtual reality (VR)-based digital twins and 3D SCADA in the context of Industry 4.0 for the monitoring and optimization of an olive mill. The methodology leverages virtual reality to create a digital twin that enables immersive data-driven simulations for olive mill monitoring. The proposed CPS takes data from the physical environment through the existing sensors and measurement elements in the olive mill, concentrates them, and exposes them to the virtual environment through the Open Platform Communications Unified Architecture (OPC-UA) protocol, thus establishing bidirectional and real-time communication. Furthermore, in the proposed virtual environment, the digital twin is interfaced with the 3D SCADA system, allowing it to create virtual models of the process. This innovative approach has the potential to revolutionize the olive oil industry by improving operational efficiency, product quality, and sustainability while optimizing maintenance practices.
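
    The OPC-UA data path described above can be illustrated with a short client read using the open-source python-opcua library; the endpoint URL and node identifier below are hypothetical placeholders, not the mill's actual address space.

```python
from opcua import Client  # open-source python-opcua library

# Hypothetical endpoint and node id; the paper's address space is not public.
client = Client("opc.tcp://olive-mill-plc.local:4840")
try:
    client.connect()
    temperature = client.get_node("ns=2;s=Malaxer.Temperature")
    print("malaxer temperature:", temperature.get_value())  # value mirrored into the 3D SCADA twin
finally:
    client.disconnect()
```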

    A Cyber–Physical System Based on Digital Twin and 3D SCADA for Real-Time Monitoring of Olive Oil Mills Cristina Martinez-Ruedas Jose-Maria Flores-Arias Isabel M. Moreno-Garcia Matias Linan-Reyes Francisco Jose Bellido-Outeiriño doi: 10.3390/technologies12050060 Technologies 2024-04-30 12 5
    Article
    60 10.3390/technologies12050060 https://www.mdpi.com/2227-7080/12/5/60
    Technologies, Vol. 12, Pages 59: Neural Network-Based Body Weight Prediction in Pelibuey Sheep through Biometric Measurements https://www.mdpi.com/2227-7080/12/5/59

    Technologies doi: 10.3390/technologies12050059

    Authors: Alfonso J. Chay-Canul Enrique Camacho-Pérez Fernando Casanova-Lugo Omar Rodríguez-Abreo Mayra Cruz-Fernández Juvenal Rodríguez-Reséndiz

    This paper presents an intelligent system for the dynamic estimation of sheep body weight (BW). The methodology used to estimate body weight is based on measuring seven biometric parameters: height at withers, rump height, body length, body diagonal length, total body length, semicircumference of the abdomen, and semicircumference of the girth. A biometric parameter acquisition system was developed using a Kinect as the sensor. The results were contrasted with measurements obtained manually with a flexometer. The comparison yielded an average root mean square error (RMSE) of 9.91 and a mean R2 of 0.81. Subsequently, the parameters were used as inputs to a back-propagation artificial neural network. Performance tests were run with different combinations to select the best architecture. In this way, an intelligent body weight estimation system based on biometric parameters was obtained, with a 5.8% RMSE in the weight estimates for the best architecture. This approach represents an innovative, feasible, and economical alternative to support decision-making in livestock production systems.
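
    As a hedged sketch of the modeling step, the following trains a small back-propagation network mapping seven biometric inputs to body weight and reports RMSE and R2; the architecture and the synthetic data are assumptions, since the paper searched several configurations before choosing one.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.random((200, 7)) * 40 + 40                 # seven synthetic biometric measurements (cm)
y = X @ rng.random(7) + rng.normal(0, 2, 200)      # synthetic body weight (kg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                   random_state=1).fit(X_tr, y_tr)
pred = net.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5, "R2:", r2_score(y_te, pred))
```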

    Neural Network-Based Body Weight Prediction in Pelibuey Sheep through Biometric Measurements Alfonso J. Chay-Canul Enrique Camacho-Pérez Fernando Casanova-Lugo Omar Rodríguez-Abreo Mayra Cruz-Fernández Juvenal Rodríguez-Reséndiz doi: 10.3390/technologies12050059 Technologies 2024-04-30 12 5
    Article
    59 10.3390/technologies12050059 https://www.mdpi.com/2227-7080/12/5/59
    Technologies, Vol. 12, Pages 58: RFID Tags for On-Metal Applications: A Brief Survey https://www.mdpi.com/2227-7080/12/5/58

    Technologies doi: 10.3390/technologies12050058

    Authors: Emanuel Pereira Sandoval Júnior Luís Felipe Vieira Silva Mateus Batista Eliel Santos Ícaro Araújo Jobson Araújo Erick Barboza Francisco Gomes Ismael Trindade Fraga Daniel Oliveira Dos Santos Roger Davanso

    Radio-frequency identification technology finds extensive use in various industrial applications, including those involving metallic surfaces. The integration of radio-frequency identification systems with metal surfaces, such as those found in the automotive sector, presents distinct challenges that can notably affect system efficacy due to metal’s tendency to reflect electromagnetic waves, thus degrading the functionality of conventional radio-frequency identification tags. This highlights the importance of conducting research into academic publications and patents to grasp the current advancements and challenges in this field, aiming to improve the applications of radio-frequency identification tag technology on metal. Consequently, this research undertakes a concise review of both the literature and patents exploring radio-frequency identification technology’s use for on-metal tags, utilizing resources like Google Scholar and Google Patents. The research categorized crucial aspects such as tag flexibility, operating frequency, and the geographic origins of the research. Findings highlight China’s prominent role in contributing to metal-focused radio-frequency identification tag research, with a considerable volume of articles and patents. In particular, flexible tags and the Ultra-High Frequency range are dominant in both scholarly and patent documents, reflecting their significance in radio-frequency identification technology applications. The research underscores a vibrant area of development within radio-frequency identification technology, with continued innovation driven by specific industrial needs. Despite the noted advances, the presence of a significant percentage of no longer valid patents suggests substantial opportunities for further research and innovation in radio-frequency identification technology for on-metal applications, especially considering the demand for flexible tags and for solutions that offer specialized characteristics or are tailored for specific uses.

    RFID Tags for On-Metal Applications: A Brief Survey Emanuel Pereira Sandoval Júnior Luís Felipe Vieira Silva Mateus Batista Eliel Santos Ícaro Araújo Jobson Araújo Erick Barboza Francisco Gomes Ismael Trindade Fraga Daniel Oliveira Dos Santos Roger Davanso doi: 10.3390/technologies12050058 Technologies 2024-04-27 12 5
    Article
    58 10.3390/technologies12050058 https://www.mdpi.com/2227-7080/12/5/58
    Technologies, Vol. 12, Pages 57: Miniaturized Microstrip Dual-Channel Diplexer Based on Modified Meander Line Resonators for Wireless and Computer Communication Technologies https://www.mdpi.com/2227-7080/12/5/57

    Technologies doi: 10.3390/technologies12050057

    Authors: Yaqeen Sabah Mezaal Shahad K. Khaleel Ban M. Alameri Kadhum Al-Majdi Aqeel A. Al-Hilali

    There has been a lot of interest in microstrip diplexers lately due to their potential use in numerous wireless and computer communication technologies, including radio broadcasting, mobile phones, broadband wireless, and satellite-based communication systems. A diplexer accomplishes this by combining two distinct filters onto a common communication channel. This article presents a narrow-band microstrip diplexer that uses a stepped impedance resonator, a uniform impedance resonator, tiny square patches, and a meander line resonator. The proposed diplexer could be made smaller than its initial dimensions by utilizing the winding (meander) construction. To model the microstrip diplexer topology for WiMAX and WiFi/WLAN at 1.66 GHz and 2.52 GHz, the Applied Wave Research (AWR) solver was employed. The design exhibited an insertion loss of 3.2 dB and a return loss of 16 dB for the first channel, while the insertion loss and return loss were 2.88 dB and 21 dB, respectively, for the second channel. When both filters were simulated, the band isolation was 31 dB. The proposed microstrip diplexer was fabricated on an FR4 epoxy laminate with dimensions of 32 × 26 mm2. The simulated S-parameter phases and group delays closely matched the measurements.
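
    The quoted figures follow the textbook S-parameter definitions below; taking port 1 as the common port is the standard convention, not something stated in the abstract.

```latex
% Textbook definitions, with port 1 as the common port and ports 2, 3 as channels:
\mathrm{IL}_{k} = -20\,\log_{10}\lvert S_{k1}\rvert, \qquad
\mathrm{RL} = -20\,\log_{10}\lvert S_{11}\rvert, \qquad
\mathrm{Isolation} = -20\,\log_{10}\lvert S_{32}\rvert
% Channel 1: IL = 3.2 dB, RL = 16 dB; channel 2: IL = 2.88 dB, RL = 21 dB;
% isolation between the two channels: 31 dB.
```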

    Miniaturized Microstrip Dual-Channel Diplexer Based on Modified Meander Line Resonators for Wireless and Computer Communication Technologies Yaqeen Sabah Mezaal Shahad K. Khaleel Ban M. Alameri Kadhum Al-Majdi Aqeel A. Al-Hilali doi: 10.3390/technologies12050057 Technologies 2024-04-24 12 5
    Article
    57 10.3390/technologies12050057 https://www.mdpi.com/2227-7080/12/5/57
    Technologies, Vol. 12, Pages 56: An End-to-End Lightweight Multi-Scale CNN for the Classification of Lung and Colon Cancer with XAI Integration https://www.mdpi.com/2227-7080/12/4/56

    Technologies doi: 10.3390/technologies12040056

    Authors: Mohammad Asif Hasan Fariha Haque Saifur Rahman Sabuj Hasan Sarker Md. Omaer Faruq Goni Fahmida Rahman Md Mamunur Rashid

    To effectively treat lung and colon cancer and save lives, early and accurate identification is essential. Conventional diagnosis takes a long time and requires the manual expertise of radiologists. The rising number of new cancer cases makes it challenging to process massive volumes of data quickly. Different machine learning approaches to the classification and detection of lung and colon cancer have been proposed by multiple research studies. However, when it comes to self-learning classification and detection tasks, deep learning (DL) excels. This paper suggests a novel DL convolutional neural network (CNN) model for detecting lung and colon cancer. The proposed model is lightweight and multi-scale: it uses only 1.1 million parameters, making it appropriate for real-time applications, and it provides an end-to-end solution. By incorporating features extracted at multiple scales, the model can effectively capture both local and global patterns within the input data. Explainability tools such as gradient-weighted class activation mapping and Shapley additive explanations can identify potential problems by highlighting the specific areas of the input data that influence the model’s decision. The experimental findings demonstrate that the proposed model outperformed the competition in lung and colon cancer detection, achieving an accuracy rate of 99.20% for multi-class (five-class) predictions.
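
    A hedged sketch of the multi-scale idea follows: parallel convolutions with different kernel sizes are concatenated so that local and global patterns are captured together. The layer sizes and depth are illustrative; this is not the paper's exact 1.1-million-parameter network.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(224, 224, 3))
# Three receptive-field scales in parallel, then fused by concatenation.
branches = [layers.Conv2D(16, k, padding="same", activation="relu")(inputs)
            for k in (3, 5, 7)]
x = layers.Concatenate()(branches)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(5, activation="softmax")(x)  # five lung/colon tissue classes
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```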

    An End-to-End Lightweight Multi-Scale CNN for the Classification of Lung and Colon Cancer with XAI Integration Mohammad Asif Hasan Fariha Haque Saifur Rahman Sabuj Hasan Sarker Md. Omaer Faruq Goni Fahmida Rahman Md Mamunur Rashid doi: 10.3390/technologies12040056 Technologies 2024-04-21 12 4
    Article
    56 10.3390/technologies12040056 https://www.mdpi.com/2227-7080/12/4/56
    Technologies, Vol. 12, Pages 55: Digital Twin Models for Personalised and Predictive Medicine in Ophthalmology https://www.mdpi.com/2227-7080/12/4/55

    Technologies doi: 10.3390/technologies12040055

    Authors: Miruna-Elena Iliuţă Mihnea-Alexandru Moisescu Simona-Iuliana Caramihai Alexandra Cernian Eugen Pop Daniel-Ioan Chiş Traian-Costin Mitulescu

    This article explores the integration of Digital Twins in Systems and Predictive Medicine, focusing on eye diagnosis. By utilizing Digital Twin models, the proposed framework can support early diagnosis and predict post-treatment evolution by providing customized simulation scenarios. Furthermore, a structured architectural framework comprising five levels is proposed, integrating Digital Twin, Systems Medicine, and Predictive Medicine for managing eye diseases. Based on demographic parameters, statistical analyses were performed to identify potential correlations that may contribute to predispositions to glaucoma. With the aid of a dataset, a neural network was trained with the goal of identifying glaucoma. This comprehensive approach, based on statistical analysis and Machine Learning, is a promising method to enhance diagnostic accuracy and provide personalized treatment approaches.

    Digital Twin Models for Personalised and Predictive Medicine in Ophthalmology Miruna-Elena Iliuţă Mihnea-Alexandru Moisescu Simona-Iuliana Caramihai Alexandra Cernian Eugen Pop Daniel-Ioan Chiş Traian-Costin Mitulescu doi: 10.3390/technologies12040055 Technologies 2024-04-18 12 4
    Article
    55 10.3390/technologies12040055 https://www.mdpi.com/2227-7080/12/4/55
    Technologies, Vol. 12, Pages 54: Experimental and Numerical Analysis of a Novel Cycloid-Type Rotor versus S-Type Rotor for Vertical-Axis Wind Turbine https://www.mdpi.com/2227-7080/12/4/54

    Technologies doi: 10.3390/technologies12040054

    Authors: José Eli Eduardo González-Durán Juan Manuel Olivares-Ramírez María Angélica Luján-Vega Juan Emigdio Soto-Osornio Juan Manuel García-Guendulain Juvenal Rodriguez-Resendiz

    The performance of a new vertical-axis wind turbine rotor based on the mathematical equation of the cycloid is analyzed and compared, through simulation and experimental testing, against the widely used semicircular or S-type rotor. The study examines three cases: equalizing the diameter, the chord length, and the area under the curve. Computational Fluid Dynamics (CFD) was used to simulate these cases and evaluate the moment, angular velocity, and power. Experimental validation was carried out in a wind tunnel that was designed and optimized with the support of CFD. The rotors for all three cases were 3D printed in resin to analyze their experimental performance as a function of wind speed. The moment and Maximum Power Point (MPP) were determined in each case. The simulation results indicate that the cycloid-type rotor outperforms the semicircular or S-type rotor by 15%. Additionally, experimental evidence confirms that the cycloid-type rotor performs better in all three cases. In the MPP analysis, the cycloid-type rotor achieved an efficiency of 10.8%, which was 38% better than the S-type rotor.
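
    For reference, the blade profile's generating curve and the usual efficiency measure are given below; the parametric cycloid equations are a known identity, and reading the 10.8% as a power coefficient is our interpretation of the abstract.

```latex
% Parametric equations of a cycloid traced by a circle of radius r:
x(\theta) = r\,(\theta - \sin\theta), \qquad y(\theta) = r\,(1 - \cos\theta)
% Rotor efficiency is conventionally compared via the power coefficient
C_p = \frac{P}{\tfrac{1}{2}\,\rho\,A\,v^{3}}
% P: mechanical power, \rho: air density, A: swept area, v: wind speed;
% an efficiency of 10.8% at the MPP corresponds to C_p = 0.108 under this reading.
```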

    Experimental and Numerical Analysis of a Novel Cycloid-Type Rotor versus S-Type Rotor for Vertical-Axis Wind Turbine José Eli Eduardo González-Durán Juan Manuel Olivares-Ramírez María Angélica Luján-Vega Juan Emigdio Soto-Osornio Juan Manuel García-Guendulain Juvenal Rodriguez-Resendiz doi: 10.3390/technologies12040054 Technologies 2024-04-17 12 4
    Article
    54 10.3390/technologies12040054 https://www.mdpi.com/2227-7080/12/4/54
    Technologies, Vol. 12, Pages 53: Developing a Performance Evaluation Framework Structural Model for Educational Metaverse https://www.mdpi.com/2227-7080/12/4/53

    Technologies doi: 10.3390/technologies12040053

    Authors: Elena Tsappi Ioannis Deliyannis George Nathaniel Papageorgiou

    In response to the transformative impact of digital technology on education, this study introduces a novel performance management framework for virtual learning environments suitable for the metaverse era. Based on the Structural Equation Modeling (SEM) approach, this paper proposes a comprehensive evaluative model, anchored on the integration of the Theory of Planned Behavior (TPB), the Unified Theory of Acceptance and Use of Technology (UTAUT), and the Community of Inquiry Framework (CoI). The model synthesizes five Key Performance Indicators (KPIs)—content delivery, student engagement, metaverse tool utilization, student performance, and adaptability—to intricately assess academic avatar performances in virtual educational settings. This theoretical approach marks a significant stride in understanding and enhancing avatar efficacy in the metaverse environment. It enriches the discourse on performance management in digital education and sets a foundation for future empirical studies. As virtual online environments gain prominence in education and training, this research study establishes the basic principles and highlights the key points for further empirical research in the new era of the metaverse educational environment.

    Developing a Performance Evaluation Framework Structural Model for Educational Metaverse Elena Tsappi Ioannis Deliyannis George Nathaniel Papageorgiou doi: 10.3390/technologies12040053 Technologies 2024-04-16 12 4
    Article
    53 10.3390/technologies12040053 https://www.mdpi.com/2227-7080/12/4/53
    Technologies, Vol. 12, Pages 52: A Comparison of Machine Learning-Based and Conventional Technologies for Video Compression https://www.mdpi.com/2227-7080/12/4/52

    Technologies doi: 10.3390/technologies12040052

    Authors: Lesia Mochurad

    The growing demand for high-quality video transmission over bandwidth-constrained networks and the increasing availability of video content have led to the need for efficient storage and distribution of large video files. To improve the latter, this article offers a comparison of six video compression methods without loss of quality: H.265, VP9, AV1, convolutional neural network (CNN), recurrent neural network (RNN), and deep autoencoder (DAE) approaches. The proposed approach uses a dataset of high-quality videos to implement and compare the performance of classical compression algorithms and machine learning-based algorithms. The compression efficiency and the quality of the received images were evaluated on the basis of two metrics: PSNR and SSIM. This comparison revealed the strengths and weaknesses of each approach and provided insights into how machine learning algorithms can be optimized in future research. In general, it contributes to the development of more efficient and effective video compression algorithms that can be useful for a wide range of applications.
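
    Both quality metrics used in the comparison are available in scikit-image; the snippet below computes them per frame on synthetic placeholder data, since the study's video dataset is not reproduced here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Synthetic placeholder frames; in practice each decoded frame is compared
# against the corresponding original frame.
original = np.random.default_rng(2).integers(0, 256, (480, 640, 3), dtype=np.uint8)
noise = np.random.default_rng(3).integers(-5, 6, original.shape)
decoded = np.clip(original.astype(int) + noise, 0, 255).astype(np.uint8)

print("PSNR:", peak_signal_noise_ratio(original, decoded, data_range=255))
print("SSIM:", structural_similarity(original, decoded, channel_axis=-1, data_range=255))
```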

    A Comparison of Machine Learning-Based and Conventional Technologies for Video Compression Lesia Mochurad doi: 10.3390/technologies12040052 Technologies 2024-04-15 12 4
    Article
    52 10.3390/technologies12040052 https://www.mdpi.com/2227-7080/12/4/52
    Technologies, Vol. 12, Pages 51: Monitoring of Hip Joint Forces and Physical Activity after Total Hip Replacement by an Integrated Piezoelectric Element https://www.mdpi.com/2227-7080/12/4/51

    Technologies doi: 10.3390/technologies12040051

    Authors: Franziska Geiger Henning Bathel Sascha Spors Rainer Bader Daniel Kluess

    Resultant hip joint forces can currently only be recorded in situ in a laboratory setting using instrumented total hip replacements (THRs) equipped with strain gauges. However, permanent recording is important for monitoring the structural condition of the implant, for therapeutic purposes, for self-reflection, and for research into managing the predicted increasing number of THRs worldwide. Therefore, this study aims to investigate whether a recently proposed THR with an integrated piezoelectric element represents a new possibility for the permanent recording of hip joint forces and the physical activities of the patient. Hip joint forces from nine different daily activities were obtained from the OrthoLoad database and applied to a total hip stem equipped with a piezoelectric element using a uniaxial testing machine. The forces acting on the piezoelectric element were calculated from the generated voltages. The correlation between the calculated forces on the piezoelectric element and the applied forces was investigated, and the regression equations were determined. In addition, the voltage outputs were used to predict the activity with a random forest classifier. The coefficient of determination between the applied maximum forces on the implant and the calculated maximum forces on the piezoelectric element was R2 = 0.97 (p < 0.01). The maximum forces on the THR could be determined via activity-independent determinations with a deviation of 2.49 ± 13.16% and activity-dependent calculation with 0.87 ± 7.28% deviation. The activities could be correctly predicted using the classification model with 95% accuracy. Hence, piezoelectric elements integrated into a total hip stem represent a promising sensor option for the energy-autonomous detection of joint forces and physical activities.
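
    To make the two analysis steps above concrete, here is a minimal scikit-learn sketch, with synthetic stand-in data, of (i) a linear regression relating piezoelectric voltage output to applied force and (ii) a random forest classifier predicting the activity from voltage-derived features; the array shapes, feature choices, and the assumed linear voltage-force law are illustrative assumptions, not the authors' calibration.

```python
# Sketch with synthetic data: regression of force on piezo voltage and
# activity classification from voltage features (shapes are assumptions).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
voltage = rng.uniform(0.1, 5.0, size=(500, 1))           # peak voltage per load cycle
force = 800.0 * voltage[:, 0] + rng.normal(0, 40, 500)   # hypothetical linear law, N

reg = LinearRegression().fit(voltage, force)
print("R^2:", reg.score(voltage, force))                  # analogous to the reported R2

# Activity classification from simple voltage-derived features.
features = np.c_[voltage, voltage**2]                     # illustrative feature set
activity = (voltage[:, 0] > 2.5).astype(int)              # two stand-in activity classes
X_tr, X_te, y_tr, y_te = train_test_split(features, activity, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("Activity accuracy:", clf.score(X_te, y_te))
```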

    Monitoring of Hip Joint Forces and Physical Activity after Total Hip Replacement by an Integrated Piezoelectric Element Franziska Geiger Henning Bathel Sascha Spors Rainer Bader Daniel Kluess doi: 10.3390/technologies12040051 Technologies 2024-04-09 Technologies 2024-04-09 12 4
    Article
    51 10.3390/technologies12040051 https://www.mdpi.com/2227-7080/12/4/51
    Technologies, Vol. 12, Pages 50: Past, Present, and Future of New Applications in Utilization of Eddy Currents https://www.mdpi.com/2227-7080/12/4/50 Eddy currents are an electromagnetic phenomenon that represents an inexhaustible source of inspiration for technological innovations in the 21st century. Throughout history, these currents have been a subject of research and technological development in multiple fields. This article delves into the fascinating world of eddy currents, revealing their physical foundations and highlighting their impact on a wide range of applications, ranging from non-destructive evaluation of materials to levitation phenomena, as well as their influence on fields as diverse as medicine, the automotive industry, and aerospace. The nature of eddy currents has stimulated the imaginations of scientists and engineers, driving the creation of revolutionary technologies that are transforming our society. As we progress through this article, we will cover the main aspects of eddy currents, their practical applications, and challenges for future work. 2024-04-09 Technologies, Vol. 12, Pages 50: Past, Present, and Future of New Applications in Utilization of Eddy Currents

    Technologies doi: 10.3390/technologies12040050

    Authors: Nestor O. Romero-Arismendi Juan C. Olivares-Galvan Jose L. Hernandez-Avila Rafael Escarela-Perez Victor M. Jimenez-Mondragon Felipe Gonzalez-Montañez

    Eddy currents are an electromagnetic phenomenon that represents an inexhaustible source of inspiration for technological innovations in the 21st century. Throughout history, these currents have been a subject of research and technological development in multiple fields. This article delves into the fascinating world of eddy currents, revealing their physical foundations and highlighting their impact on a wide range of applications, ranging from non-destructive evaluation of materials to levitation phenomena, as well as their influence on fields as diverse as medicine, the automotive industry, and aerospace. The nature of eddy currents has stimulated the imaginations of scientists and engineers, driving the creation of revolutionary technologies that are transforming our society. As we progress through this article, we will cover the main aspects of eddy currents, their practical applications, and challenges for future work.

    Past, Present, and Future of New Applications in Utilization of Eddy Currents Nestor O. Romero-Arismendi Juan C. Olivares-Galvan Jose L. Hernandez-Avila Rafael Escarela-Perez Victor M. Jimenez-Mondragon Felipe Gonzalez-Montañez doi: 10.3390/technologies12040050 Technologies 2024-04-09 Technologies 2024-04-09 12 4
    Review
    50 10.3390/technologies12040050 https://www.mdpi.com/2227-7080/12/4/50
    Technologies, Vol. 12, Pages 49: Numerical Study of the Influence of the Structural Parameters on the Stress Dissipation of 3D Orthogonal Woven Composites under Low-Velocity Impact https://www.mdpi.com/2227-7080/12/4/49 This study investigates the effects of the number of layers, x-yarn (weft) density, and z-yarn (binder) path on the mechanical behavior of E-glass 3D orthogonal woven (3DOW) composites during low-velocity impacts. Meso-level finite element (FE) models were developed and validated for 3DOW composites with different yarn densities and z-yarn paths, providing analyses of stress distribution within reinforcement fibers and matrix, energy absorption, and failure time. Our findings revealed that lower x-yarn densities led to the accumulation of stress concentrations. Furthermore, changing the z-yarn path, such as transitioning from plain weaves to twill or basket weaves, had a noticeable impact on stress distributions. The research highlights the significance of designing more resilient 3DOW composites for impact applications by choosing appropriate parameters in weaving composite designs. 2024-04-05 Technologies, Vol. 12, Pages 49: Numerical Study of the Influence of the Structural Parameters on the Stress Dissipation of 3D Orthogonal Woven Composites under Low-Velocity Impact

    Technologies doi: 10.3390/technologies12040049

    Authors: Wang Xu Mohammed Zikry Abdel-Fattah M. Seyam

    This study investigates the effects of the number of layers, x-yarn (weft) density, and z-yarn (binder) path on the mechanical behavior of E-glass 3D orthogonal woven (3DOW) composites during low-velocity impacts. Meso-level finite element (FE) models were developed and validated for 3DOW composites with different yarn densities and z-yarn paths, providing analyses of stress distribution within reinforcement fibers and matrix, energy absorption, and failure time. Our findings revealed that lower x-yarn densities led to the accumulation of stress concentrations. Furthermore, changing the z-yarn path, such as transitioning from plain weaves to twill or basket weaves, had a noticeable impact on stress distributions. The research highlights the significance of designing more resilient 3DOW composites for impact applications by choosing appropriate parameters in weaving composite designs.

    Numerical Study of the Influence of the Structural Parameters on the Stress Dissipation of 3D Orthogonal Woven Composites under Low-Velocity Impact Wang Xu Mohammed Zikry Abdel-Fattah M. Seyam doi: 10.3390/technologies12040049 Technologies 2024-04-05 Technologies 2024-04-05 12 4
    Article
    49 10.3390/technologies12040049 https://www.mdpi.com/2227-7080/12/4/49
    Technologies, Vol. 12, Pages 48: Carbon Fiber Polymer Reinforced 3D Printed Composites for Centrifugal Pump Impeller Manufacturing https://www.mdpi.com/2227-7080/12/4/48 Centrifugal pumps are used extensively in various everyday applications. The occurrence of corrosion phenomena during operation often leads to the failure of a pump’s operating components, such as the impeller. The present research study examines the utilization of composite materials for fabricating centrifugal pump components using additive manufacturing as an effort to fabricate corrosion-resistant parts. To achieve the latter, two nanocomposite materials, carbon fiber reinforced polyamide and carbon fiber reinforced polyphenylene sulfide, were compared with two metal alloys, cast iron and brass, which are currently used in pump impeller manufacturing. The mechanical properties of the materials are extracted by performing a series of experiments, such as uniaxial tensile tests, nanoindentation and scanning electron microscope (SEM) examination of the specimen’s fracture area. Then, computational fluid dynamics (CFD) analysis is performed using various impeller designs to determine the fluid pressure exerted on the impeller’s geometry during its operation. Finally, the maximum power rating of an impeller that can be made from such composites is determined using a static finite element model (FEM). The FEM static model is developed by integrating the data collected from the experiments with the results obtained from the CFD analysis. The current research work shows that nanocomposites can potentially be used for developing impellers with rated power of up to 9.41 kW. 2024-04-03 Technologies, Vol. 12, Pages 48: Carbon Fiber Polymer Reinforced 3D Printed Composites for Centrifugal Pump Impeller Manufacturing

    Technologies doi: 10.3390/technologies12040048

    Authors: Gabriel Mansour Vasileios Papageorgiou Dimitrios Tzetzis

    Centrifugal pumps are used extensively in various everyday applications. The occurrence of corrosion phenomena during operation often leads to the failure of a pump’s operating components, such as the impeller. The present research study examines the utilization of composite materials for fabricating centrifugal pump components using additive manufacturing as an effort to fabricate corrosion-resistant parts. To achieve the latter, two nanocomposite materials, carbon fiber reinforced polyamide and carbon fiber reinforced polyphenylene sulfide, were compared with two metal alloys, cast iron and brass, which are currently used in pump impeller manufacturing. The mechanical properties of the materials are extracted by performing a series of experiments, such as uniaxial tensile tests, nanoindentation and scanning electron microscope (SEM) examination of the specimen’s fracture area. Then, computational fluid dynamics (CFD) analysis is performed using various impeller designs to determine the fluid pressure exerted on the impeller’s geometry during its operation. Finally, the maximum power rating of an impeller that can be made from such composites is determined using a static finite element model (FEM). The FEM static model is developed by integrating the data collected from the experiments with the results obtained from the CFD analysis. The current research work shows that nanocomposites can potentially be used for developing impellers with rated power of up to 9.41 kW.

    Carbon Fiber Polymer Reinforced 3D Printed Composites for Centrifugal Pump Impeller Manufacturing Gabriel Mansour Vasileios Papageorgiou Dimitrios Tzetzis doi: 10.3390/technologies12040048 Technologies 2024-04-03 Technologies 2024-04-03 12 4
    Article
    48 10.3390/technologies12040048 https://www.mdpi.com/2227-7080/12/4/48
    Technologies, Vol. 12, Pages 47: Impact Localization for Haptic Input Devices Using Hybrid Laminates with Sensoric Function https://www.mdpi.com/2227-7080/12/4/47 Energy savings required in the automotive sector can be achieved across all domains through weight reduction and the merging of manufacturing processes in production. This is addressed through functional integration in lightweight materials and manufacturing in a process close to large-scale production. In previous work, separate steps of a process chain for manufacturing a center console cover utilizing a sensoric hybrid laminate have been developed and evaluated. This includes the process steps of joining, forming and inline polarization as well as connecting to an embedded system. This work continues the research process by evaluating impact localization methods to use the center console as a haptic input device. For this purpose, different deep learning methods are derived from the state of the art and analyzed for their applicability in two consecutive studies. The results show that MLPs, LSTMs, GRUs and CNNs are suitable to localize impacts on the novel laminate with high localization rates of up to 99%, and thus the usability of the developed laminate as a haptic input device has been proven. 2024-04-01 Technologies, Vol. 12, Pages 47: Impact Localization for Haptic Input Devices Using Hybrid Laminates with Sensoric Function

    Technologies doi: 10.3390/technologies12040047

    Authors: René Schmidt Alexander Graf Ricardo Decker Stephan Lede Verena Kräusel Lothar Kroll Wolfram Hardt

    Energy savings required in the automotive sector can be achieved across all domains through weight reduction and the merging of manufacturing processes in production. This is addressed through functional integration in lightweight materials and manufacturing in a process close to large-scale production. In previous work, separate steps of a process chain for manufacturing a center console cover utilizing a sensoric hybrid laminate have been developed and evaluated. This includes the process steps of joining, forming and inline polarization as well as connecting to an embedded system. This work continues the research process by evaluating impact localization methods to use the center console as a haptic input device. For this purpose, different deep learning methods are derived from the state of the art and analyzed for their applicability in two consecutive studies. The results show that MLPs, LSTMs, GRUs and CNNs are suitable to localize impacts on the novel laminate with high localization rates of up to 99%, and thus the usability of the developed laminate as a haptic input device has been proven.
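
    As an illustration of the kind of model compared in this study, the following is a minimal PyTorch sketch of an MLP that maps a window of laminate sensor samples to one of a grid of impact locations; the input size, number of location classes, and training data are assumptions, not the authors' configuration.

```python
# Minimal MLP sketch for impact localization (dimensions are assumptions).
import torch
import torch.nn as nn

N_SAMPLES = 256    # sensor samples per impact window (assumed)
N_LOCATIONS = 16   # discrete impact positions on the laminate (assumed)

model = nn.Sequential(
    nn.Linear(N_SAMPLES, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, N_LOCATIONS),  # logits over candidate locations
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a synthetic batch of sensor windows.
x = torch.randn(32, N_SAMPLES)             # stand-in sensor signals
y = torch.randint(0, N_LOCATIONS, (32,))   # stand-in location labels
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```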

    Impact Localization for Haptic Input Devices Using Hybrid Laminates with Sensoric Function René Schmidt Alexander Graf Ricardo Decker Stephan Lede Verena Kräusel Lothar Kroll Wolfram Hardt doi: 10.3390/technologies12040047 Technologies 2024-04-01 Technologies 2024-04-01 12 4
    Article
    47 10.3390/technologies12040047 https://www.mdpi.com/2227-7080/12/4/47
    Technologies, Vol. 12, Pages 46: Enhancing Patient Care in Radiotherapy: Proof-of-Concept of a Monitoring Tool https://www.mdpi.com/2227-7080/12/4/46 Introduction: A monitoring tool, named Oncology Data Management (ODM), was developed in radiotherapy to generate structured information based on data contained in an Oncology Information System (OIS). This study presents the proof-of-concept of the ODM tool and highlights its applications to enhance patient care in radiotherapy. Material & Methods: ODM is a sophisticated SQL query which extracts specific features from the Mosaiq OIS (Elekta, UK) database into an independent structured database. Data from 2016 to 2022 were extracted to enable monitoring of treatment units and evaluation of the quality of patient care. Results: A total of 25,259 treatments were extracted. Treatment machine monitoring revealed a difference of 11 treatments per day between two units. ODM showed that the unit with fewer daily treatments performed more complex treatments on diverse locations. In 2019, the implementation of ODM led to the definition of quality indicators and to organizational changes that improved the quality of care. As a consequence, for palliative treatments, there was an improvement in the proportion of treatments prepared within 7 calendar days between the scanner and the first treatment session (29.1% before 2020, 40.4% in 2020 and 46.4% after 2020). The study of fractionation in breast treatments exhibited decreased prescription variability after 2019, with distinct patient age categories. Bi-fractionation once a week for larynx prescriptions of 35 × 2.0 Gy achieved an overall treatment duration of 47.0 ± 3.0 calendar days in 2022. Conclusions: ODM enables data extraction from the OIS and provides quantitative tools for improving the organization of a department and the quality of patient care in radiotherapy. 2024-03-29 Technologies, Vol. 12, Pages 46: Enhancing Patient Care in Radiotherapy: Proof-of-Concept of a Monitoring Tool

    Technologies doi: 10.3390/technologies12040046

    Authors: Guillaume Beldjoudi Rémi Eugène Vincent Grégoire Ronan Tanguy

    Introduction: A monitoring tool, named Oncology Data Management (ODM), was developed in radiotherapy to generate structured information based on data contained in an Oncology Information System (OIS). This study presents the proof-of-concept of the ODM tool and highlights its applications to enhance patient care in radiotherapy. Material & Methods: ODM is a sophisticated SQL query which extracts specific features from the Mosaiq OIS (Elekta, UK) database into an independent structured database. Data from 2016 to 2022 were extracted to enable monitoring of treatment units and evaluation of the quality of patient care. Results: A total of 25,259 treatments were extracted. Treatment machine monitoring revealed a difference of 11 treatments per day between two units. ODM showed that the unit with fewer daily treatments performed more complex treatments on diverse locations. In 2019, the implementation of ODM led to the definition of quality indicators and to organizational changes that improved the quality of care. As a consequence, for palliative treatments, there was an improvement in the proportion of treatments prepared within 7 calendar days between the scanner and the first treatment session (29.1% before 2020, 40.4% in 2020 and 46.4% after 2020). The study of fractionation in breast treatments exhibited decreased prescription variability after 2019, with distinct patient age categories. Bi-fractionation once a week for larynx prescriptions of 35 × 2.0 Gy achieved an overall treatment duration of 47.0 ± 3.0 calendar days in 2022. Conclusions: ODM enables data extraction from the OIS and provides quantitative tools for improving the organization of a department and the quality of patient care in radiotherapy.
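
    The ODM query itself is not reproduced in the abstract, but a quality indicator such as the seven-day preparation proportion reported above can be computed from extracted records along the following lines; the column names and the pandas-based approach are illustrative assumptions, not the tool's implementation.

```python
# Sketch: computing a preparation-delay quality indicator from extracted
# OIS records. Column names and values are assumptions for illustration.
import pandas as pd

records = pd.DataFrame({
    "intent": ["palliative", "palliative", "curative", "palliative"],
    "ct_scan_date": pd.to_datetime(["2021-03-01", "2021-03-10", "2021-03-02", "2021-03-20"]),
    "first_session_date": pd.to_datetime(["2021-03-05", "2021-03-22", "2021-03-15", "2021-03-26"]),
})

palliative = records[records["intent"] == "palliative"].copy()
palliative["prep_days"] = (palliative["first_session_date"] - palliative["ct_scan_date"]).dt.days

# Proportion of palliative treatments prepared within 7 calendar days.
indicator = (palliative["prep_days"] <= 7).mean()
print(f"Prepared within 7 days: {indicator:.1%}")
```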

    Enhancing Patient Care in Radiotherapy: Proof-of-Concept of a Monitoring Tool Guillaume Beldjoudi Rémi Eugène Vincent Grégoire Ronan Tanguy doi: 10.3390/technologies12040046 Technologies 2024-03-29 Technologies 2024-03-29 12 4
    Article
    46 10.3390/technologies12040046 https://www.mdpi.com/2227-7080/12/4/46
    Technologies, Vol. 12, Pages 45: An Artificial Bee Colony Algorithm for Coordinated Scheduling of Production Jobs and Flexible Maintenance in Permutation Flowshops https://www.mdpi.com/2227-7080/12/4/45 This research work addresses the integrated scheduling of jobs and flexible (non-systematic) maintenance interventions in permutation flowshop production systems. We propose a coordinated model in which the time intervals between successive maintenance tasks as well as their number are assumed to be non-fixed for each machine on the shopfloor. With such a flexible nature of maintenance activities, the resulting joint schedule is more practical and representative of real-world scenarios. Our goal is to determine the best job permutation in which flexible maintenance activities are properly incorporated. To tackle the NP-hard nature of this problem, an artificial bee colony (ABC) algorithm is developed to minimize the total production time (Makespan). Experiments are conducted utilizing well-known Taillard’s benchmarks, enriched with maintenance data, to compare the proposed algorithm performance against the variable neighbourhood search (VNS) method from the literature. Computational results demonstrate the effectiveness of the proposed algorithm in terms of both solution quality and computational times. 2024-03-25 Technologies, Vol. 12, Pages 45: An Artificial Bee Colony Algorithm for Coordinated Scheduling of Production Jobs and Flexible Maintenance in Permutation Flowshops

    Technologies doi: 10.3390/technologies12040045

    Authors: Asma Ladj Fatima Benbouzid-Si Tayeb Alaeddine Dahamni Mohamed Benbouzid

    This research work addresses the integrated scheduling of jobs and flexible (non-systematic) maintenance interventions in permutation flowshop production systems. We propose a coordinated model in which the time intervals between successive maintenance tasks as well as their number are assumed to be non-fixed for each machine on the shopfloor. With such a flexible nature of maintenance activities, the resulting joint schedule is more practical and representative of real-world scenarios. Our goal is to determine the best job permutation in which flexible maintenance activities are properly incorporated. To tackle the NP-hard nature of this problem, an artificial bee colony (ABC) algorithm is developed to minimize the total production time (Makespan). Experiments are conducted utilizing well-known Taillard’s benchmarks, enriched with maintenance data, to compare the proposed algorithm performance against the variable neighbourhood search (VNS) method from the literature. Computational results demonstrate the effectiveness of the proposed algorithm in terms of both solution quality and computational times.
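
    The objective evaluated by the ABC algorithm, the makespan of a permutation flowshop schedule, follows the standard recurrence C[i][m] = p[i][m] + max(C[i-1][m], C[i][m-1]); a minimal evaluation sketch (without the maintenance insertions, which the paper handles on top of this) is shown below.

```python
# Makespan of a permutation flowshop: a job cannot start on machine m
# before it finishes on machine m-1 and machine m becomes free.
def makespan(permutation, proc_times):
    """proc_times[j][m] = processing time of job j on machine m."""
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines  # completion time of the last job per machine
    for job in permutation:
        for m in range(n_machines):
            prev = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], prev) + proc_times[job][m]
    return completion[-1]

# Tiny example: 3 jobs on 2 machines.
p = [[3, 2], [1, 4], [2, 2]]
print(makespan([0, 1, 2], p))  # evaluates one candidate permutation -> 11
print(makespan([1, 0, 2], p))  # the ABC search compares many such candidates -> 9
```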

    An Artificial Bee Colony Algorithm for Coordinated Scheduling of Production Jobs and Flexible Maintenance in Permutation Flowshops Asma Ladj Fatima Benbouzid-Si Tayeb Alaeddine Dahamni Mohamed Benbouzid doi: 10.3390/technologies12040045 Technologies 2024-03-25 Technologies 2024-03-25 12 4
    Article
    45 10.3390/technologies12040045 https://www.mdpi.com/2227-7080/12/4/45
    Technologies, Vol. 12, Pages 44: Blood Pressure Measurement Device Accuracy Evaluation: Statistical Considerations with an Implementation in R https://www.mdpi.com/2227-7080/12/4/44 Inaccuracies from devices for non-invasive blood pressure measurements have been well reported, with clinical consequences. International standards, such as ISO 81060-2 and the seminal AAMI/ANSI SP10, define protocols and acceptance criteria for these devices. Prior to applying these standards, a sample size of N >= 85 is mandatory, where N is the number of distinct subjects used to calculate device inaccuracies. Often, it is not possible to gather such a large sample, and many studies apply these standards with a smaller one. The objective of the paper is to introduce a methodology that broadens the method first developed by the AAMI Sphygmomanometer Committee for accepting a blood pressure measurement device. We study changes in the acceptance region for various sample sizes using the sampling distribution for proportions and introduce a methodology for estimating the exact probability of the acceptance of a device. This enables the comparison of the accuracies of existing device development techniques even if they were studied with a smaller sample size. The study is useful in assisting BP measurement device manufacturers. To assist clinicians, we present a newly developed “bpAcc” package in R to evaluate acceptance statistics for various sample sizes. 2024-03-25 Technologies, Vol. 12, Pages 44: Blood Pressure Measurement Device Accuracy Evaluation: Statistical Considerations with an Implementation in R

    Technologies doi: 10.3390/technologies12040044

    Authors: Tanvi Chandel Victor Miranda Andrew Lowe Tet Chuan Lee

    Inaccuracies from devices for non-invasive blood pressure measurements have been well reported, with clinical consequences. International standards, such as ISO 81060-2 and the seminal AAMI/ANSI SP10, define protocols and acceptance criteria for these devices. Prior to applying these standards, a sample size of N >= 85 is mandatory, where N is the number of distinct subjects used to calculate device inaccuracies. Often, it is not possible to gather such a large sample, and many studies apply these standards with a smaller one. The objective of the paper is to introduce a methodology that broadens the method first developed by the AAMI Sphygmomanometer Committee for accepting a blood pressure measurement device. We study changes in the acceptance region for various sample sizes using the sampling distribution for proportions and introduce a methodology for estimating the exact probability of the acceptance of a device. This enables the comparison of the accuracies of existing device development techniques even if they were studied with a smaller sample size. The study is useful in assisting BP measurement device manufacturers. To assist clinicians, we present a newly developed “bpAcc” package in R to evaluate acceptance statistics for various sample sizes.
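
    As a sketch of the kind of computation involved (a generic binomial acceptance model, not the bpAcc implementation or its API), the probability that at least k of N subject errors fall within a tolerance can be evaluated exactly:

```python
# Sketch: exact probability that at least k of N subject errors are
# within tolerance, given a per-subject within-tolerance probability p.
# Generic binomial acceptance model; thresholds below are invented.
from scipy.stats import binom

def acceptance_probability(n_subjects, k_required, p_within):
    """P(X >= k_required) for X ~ Binomial(n_subjects, p_within)."""
    return binom.sf(k_required - 1, n_subjects, p_within)

# Example: N = 85 subjects, at least 73 within tolerance, p = 0.9.
print(acceptance_probability(85, 73, 0.9))
# A smaller sample shifts the acceptance region, the effect studied here.
print(acceptance_probability(35, 30, 0.9))
```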

    Blood Pressure Measurement Device Accuracy Evaluation: Statistical Considerations with an Implementation in R Tanvi Chandel Victor Miranda Andrew Lowe Tet Chuan Lee doi: 10.3390/technologies12040044 Technologies 2024-03-25 Technologies 2024-03-25 12 4
    Article
    44 10.3390/technologies12040044 https://www.mdpi.com/2227-7080/12/4/44
    Technologies, Vol. 12, Pages 43: Applied Deep Learning-Based Crop Yield Prediction: A Systematic Analysis of Current Developments and Potential Challenges https://www.mdpi.com/2227-7080/12/4/43 Agriculture is essential for global income, poverty reduction, and food security, with crop yield being a crucial measure in this field. Traditional crop yield prediction methods, reliant on subjective assessments such as farmers’ experiences, tend to be error-prone and lack precision across vast farming areas, especially in data-scarce regions. Recent advancements in data collection, notably through high-resolution sensors and the use of deep learning (DL), have significantly increased the accuracy and breadth of agricultural data, providing better support for policymakers and administrators. In our study, we conduct a systematic literature review to explore the application of DL in crop yield forecasting, underscoring its growing significance in enhancing yield predictions. Our approach enabled us to identify 92 relevant studies across four major scientific databases: the Directory of Open Access Journals (DOAJ), the Institute of Electrical and Electronics Engineers (IEEE), the Multidisciplinary Digital Publishing Institute (MDPI), and ScienceDirect. These studies, all empirical research published in the last eight years, met stringent selection criteria, including empirical validity, methodological clarity, and a minimum quality score, ensuring their rigorous research standards and relevance. Our in-depth analysis of these papers aimed to synthesize insights on the crops studied, DL models utilized, key input data types, and the specific challenges and prerequisites for accurate DL-based yield forecasting. Our findings reveal that convolutional neural networks and Long Short-Term Memory are the dominant deep learning architectures in crop yield prediction, with a focus on cereals like wheat (Triticum aestivum) and corn (Zea mays). Many studies leverage satellite imagery, but there is a growing trend towards using Unmanned Aerial Vehicles (UAVs) for data collection. Our review synthesizes global research, suggests future directions, and highlights key studies, acknowledging that results may vary across different databases and emphasizing the need for continual updates due to the evolving nature of the field. 2024-03-24 Technologies, Vol. 12, Pages 43: Applied Deep Learning-Based Crop Yield Prediction: A Systematic Analysis of Current Developments and Potential Challenges

    Technologies doi: 10.3390/technologies12040043

    Authors: Khadija Meghraoui Imane Sebari Juergen Pilz Kenza Ait El Kadi Saloua Bensiali

    Agriculture is essential for global income, poverty reduction, and food security, with crop yield being a crucial measure in this field. Traditional crop yield prediction methods, reliant on subjective assessments such as farmers’ experiences, tend to be error-prone and lack precision across vast farming areas, especially in data-scarce regions. Recent advancements in data collection, notably through high-resolution sensors and the use of deep learning (DL), have significantly increased the accuracy and breadth of agricultural data, providing better support for policymakers and administrators. In our study, we conduct a systematic literature review to explore the application of DL in crop yield forecasting, underscoring its growing significance in enhancing yield predictions. Our approach enabled us to identify 92 relevant studies across four major scientific databases: the Directory of Open Access Journals (DOAJ), the Institute of Electrical and Electronics Engineers (IEEE), the Multidisciplinary Digital Publishing Institute (MDPI), and ScienceDirect. These studies, all empirical research published in the last eight years, met stringent selection criteria, including empirical validity, methodological clarity, and a minimum quality score, ensuring their rigorous research standards and relevance. Our in-depth analysis of these papers aimed to synthesize insights on the crops studied, DL models utilized, key input data types, and the specific challenges and prerequisites for accurate DL-based yield forecasting. Our findings reveal that convolutional neural networks and Long Short-Term Memory are the dominant deep learning architectures in crop yield prediction, with a focus on cereals like wheat (Triticum aestivum) and corn (Zea mays). Many studies leverage satellite imagery, but there is a growing trend towards using Unmanned Aerial Vehicles (UAVs) for data collection. Our review synthesizes global research, suggests future directions, and highlights key studies, acknowledging that results may vary across different databases and emphasizing the need for continual updates due to the evolving nature of the field.

    Applied Deep Learning-Based Crop Yield Prediction: A Systematic Analysis of Current Developments and Potential Challenges Khadija Meghraoui Imane Sebari Juergen Pilz Kenza Ait El Kadi Saloua Bensiali doi: 10.3390/technologies12040043 Technologies 2024-03-24 Technologies 2024-03-24 12 4
    Review
    43 10.3390/technologies12040043 https://www.mdpi.com/2227-7080/12/4/43
    Technologies, Vol. 12, Pages 42: Performance Assessment of Different Sustainable Energy Systems Using Multiple-Criteria Decision-Making Model and Self-Organizing Maps https://www.mdpi.com/2227-7080/12/3/42 The surging demand for electricity, propelled by the widespread adoption of intelligent grids and heightened consumer interaction with electricity demand and pricing, underscores the imperative for precise prognostication of optimal power plant utilization. To confront this challenge, a dataset centered on issue-centric power plants is meticulously crafted. This dataset encapsulates pivotal facets indispensable for attaining sustainable power generation, including meager gas emissions, installation cost, low maintenance cost, elevated power generation, and copious resource availability. The selection of an optimal power plant entails a multifaceted decision-making process, demanding a systematic approach. Our research advocates the amalgamation of multiple-criteria decision-making (MCDM) models with self-organizing maps to gauge the efficacy of diverse sustainable energy systems. The examination discerns solar energy as the preeminent MCDM criterion, securing the apex position with a score of 83.4%, attributable to its ample resource availability, considerable energy generation, nil greenhouse gas emissions, and commendable efficiency. Wind and hydroelectric power closely trail, registering scores of 75.3% and 74.5%, respectively, along with other energy sources. The analysis underscores the supremacy of the renewable energy sources, particularly solar and wind, in fulfilling sustainability objectives and scrutinizing factors such as cost, resource availability, and the environmental impact. The proposed methodology empowers stakeholders to make judicious decisions, accentuating facets that are required for more sustainable and resilient power infrastructure. 2024-03-19 Technologies, Vol. 12, Pages 42: Performance Assessment of Different Sustainable Energy Systems Using Multiple-Criteria Decision-Making Model and Self-Organizing Maps

    Technologies doi: 10.3390/technologies12030042

    Authors: Satyabrata Dash Sujata Chakravarty Nimay Chandra Giri Umashankar Ghugar Georgios Fotis

    The surging demand for electricity, propelled by the widespread adoption of intelligent grids and heightened consumer interaction with electricity demand and pricing, underscores the imperative for precise prognostication of optimal power plant utilization. To confront this challenge, a dataset centered on issue-centric power plants is meticulously crafted. This dataset encapsulates pivotal facets indispensable for attaining sustainable power generation, including meager gas emissions, installation cost, low maintenance cost, elevated power generation, and copious resource availability. The selection of an optimal power plant entails a multifaceted decision-making process, demanding a systematic approach. Our research advocates the amalgamation of multiple-criteria decision-making (MCDM) models with self-organizing maps to gauge the efficacy of diverse sustainable energy systems. The examination discerns solar energy as the preeminent MCDM criterion, securing the apex position with a score of 83.4%, attributable to its ample resource availability, considerable energy generation, nil greenhouse gas emissions, and commendable efficiency. Wind and hydroelectric power closely trail, registering scores of 75.3% and 74.5%, respectively, along with other energy sources. The analysis underscores the supremacy of the renewable energy sources, particularly solar and wind, in fulfilling sustainability objectives and scrutinizing factors such as cost, resource availability, and the environmental impact. The proposed methodology empowers stakeholders to make judicious decisions, accentuating facets that are required for more sustainable and resilient power infrastructure.
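
    To illustrate the ranking step, a minimal weighted-sum MCDM scoring sketch follows; the criteria, weights, and scores are invented for illustration and are not the study's data (the paper additionally uses self-organizing maps, which are omitted here).

```python
# Sketch: simple weighted-sum MCDM ranking of energy alternatives.
# Criteria weights and scores are illustrative, not the study's data.
import numpy as np

criteria = ["emissions", "install_cost", "maintenance", "generation", "resources"]
weights = np.array([0.25, 0.15, 0.15, 0.25, 0.20])  # assumed, sum to 1

# Rows: solar, wind, hydro. Scores normalized to [0, 1], higher is better.
scores = np.array([
    [1.00, 0.60, 0.80, 0.75, 0.95],  # solar
    [0.95, 0.55, 0.65, 0.70, 0.80],  # wind
    [0.90, 0.40, 0.70, 0.85, 0.70],  # hydro
])

totals = scores @ weights  # aggregate score per alternative
for name, total in zip(["solar", "wind", "hydro"], totals):
    print(f"{name}: {total:.3f}")
```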

    Performance Assessment of Different Sustainable Energy Systems Using Multiple-Criteria Decision-Making Model and Self-Organizing Maps Satyabrata Dash Sujata Chakravarty Nimay Chandra Giri Umashankar Ghugar Georgios Fotis doi: 10.3390/technologies12030042 Technologies 2024-03-19 Technologies 2024-03-19 12 3
    Article
    42 10.3390/technologies12030042 https://www.mdpi.com/2227-7080/12/3/42
    Technologies, Vol. 12, Pages 41: Implementation of a Wireless Sensor Network for Environmental Measurements https://www.mdpi.com/2227-7080/12/3/41 Nowadays, the need to monitor different physical variables is constantly increasing, serving applications that range from humidity monitoring to disease detection in living beings via a local or wireless sensor network (WSN). The Internet of Things has become a valuable approach to climate monitoring, daily parcel monitoring, early disease detection, crop plant counting, and risk assessment. Herein, an energy-autonomous wireless sensor network for monitoring environmental variables is proposed. The network’s tree topology configuration, which involves master and slave modules, is managed by microcontrollers embedded with sensors, constituting a key part of the WSN architecture. The system’s slave modules are equipped with sensors for temperature, humidity, gas, and light detection, along with a photovoltaic cell to energize the system, and a WiFi module for data transmission. The receiver incorporates a user interface and the necessary computing components for efficient data handling. In an open-field configuration, the transceiver range of the proposed system reaches up to 750 m per module. The advantages of this approach are its scalability, energy efficiency, and the system’s ability to provide real-time environmental monitoring over a large area, which is particularly beneficial for applications in precision agriculture and environmental management. 2024-03-16 Technologies, Vol. 12, Pages 41: Implementation of a Wireless Sensor Network for Environmental Measurements

    Technologies doi: 10.3390/technologies12030041

    Authors: Rosa M. Woo-García José M. Pérez-Vista Adrián Sánchez-Vidal Agustín L. Herrera-May Edith Osorio-de-la-Rosa Felipe Caballero-Briones Francisco López-Huerta

    Nowadays, the need to monitor different physical variables is constantly increasing, serving applications that range from humidity monitoring to disease detection in living beings via a local or wireless sensor network (WSN). The Internet of Things has become a valuable approach to climate monitoring, daily parcel monitoring, early disease detection, crop plant counting, and risk assessment. Herein, an energy-autonomous wireless sensor network for monitoring environmental variables is proposed. The network’s tree topology configuration, which involves master and slave modules, is managed by microcontrollers embedded with sensors, constituting a key part of the WSN architecture. The system’s slave modules are equipped with sensors for temperature, humidity, gas, and light detection, along with a photovoltaic cell to energize the system, and a WiFi module for data transmission. The receiver incorporates a user interface and the necessary computing components for efficient data handling. In an open-field configuration, the transceiver range of the proposed system reaches up to 750 m per module. The advantages of this approach are its scalability, energy efficiency, and the system’s ability to provide real-time environmental monitoring over a large area, which is particularly beneficial for applications in precision agriculture and environmental management.
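
    To illustrate the slave-module role described above, here is a hedged Python-style sketch of a sensor node loop that samples readings and posts them to a master over WiFi; the endpoint URL, payload fields, and reporting interval are assumptions, not the authors' protocol.

```python
# Sketch of a slave-node reporting loop: sample sensors and post JSON to
# the master node. Endpoint, fields, and interval are assumptions.
import time
import random
import requests

MASTER_URL = "http://192.168.1.10/ingest"  # hypothetical master endpoint

def read_sensors():
    # Stand-ins for real temperature/humidity/gas/light sensor drivers.
    return {
        "temperature_c": round(random.uniform(15, 35), 1),
        "humidity_pct": round(random.uniform(30, 90), 1),
        "gas_ppm": round(random.uniform(0, 400), 1),
        "light_lux": round(random.uniform(0, 1200), 1),
    }

for _ in range(3):  # a deployed node would loop indefinitely
    payload = {"node_id": "slave-01", "readings": read_sensors()}
    try:
        requests.post(MASTER_URL, json=payload, timeout=5)
    except requests.RequestException:
        pass  # a real node would buffer the reading and retry
    time.sleep(60)  # assumed one-minute reporting interval
```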

    Implementation of a Wireless Sensor Network for Environmental Measurements Rosa M. Woo-García José M. Pérez-Vista Adrián Sánchez-Vidal Agustín L. Herrera-May Edith Osorio-de-la-Rosa Felipe Caballero-Briones Francisco López-Huerta doi: 10.3390/technologies12030041 Technologies 2024-03-16 Technologies 2024-03-16 12 3
    Article
    41 10.3390/technologies12030041 https://www.mdpi.com/2227-7080/12/3/41
    Technologies, Vol. 12, Pages 40: Applications of 3D Reconstruction in Virtual Reality-Based Teleoperation: A Review in the Mining Industry https://www.mdpi.com/2227-7080/12/3/40 Although multiview platforms have enhanced work efficiency in mining teleoperation systems, they also induce “cognitive tunneling” and depth-detection issues for operators. These issues inadvertently focus their attention on a restricted central view. Fully immersive virtual reality (VR) has recently attracted the attention of specialists in the mining industry to address these issues. Nevertheless, developing VR teleoperation systems remains a formidable challenge, particularly in achieving a realistic 3D model of the environment. This study investigates the existing gap in fully immersive teleoperation systems within the mining industry, aiming to identify the optimal methods for their development and to ensure operators’ safety. To achieve this purpose, a literature search is employed to identify and extract information from the most relevant sources. The most advanced teleoperation systems are examined by focusing on their visualization types. Then, various 3D reconstruction techniques applicable to mining VR teleoperation are investigated, and their data acquisition methods, sensor technologies, and algorithms are analyzed. Ultimately, the study discusses challenges associated with 3D reconstruction techniques for mining teleoperation. The findings demonstrated that the real-time 3D reconstruction of underground mining environments primarily involves depth-based techniques. In contrast, point cloud generation techniques can mostly be employed for 3D reconstruction in open-pit mining operations. 2024-03-15 Technologies, Vol. 12, Pages 40: Applications of 3D Reconstruction in Virtual Reality-Based Teleoperation: A Review in the Mining Industry

    Technologies doi: 10.3390/technologies12030040

    Authors: Alireza Kamran-Pishhesari Amin Moniri-Morad Javad Sattarvand

    Although multiview platforms have enhanced work efficiency in mining teleoperation systems, they also induce “cognitive tunneling” and depth-detection issues for operators. These issues inadvertently focus their attention on a restricted central view. Fully immersive virtual reality (VR) has recently attracted the attention of specialists in the mining industry to address these issues. Nevertheless, developing VR teleoperation systems remains a formidable challenge, particularly in achieving a realistic 3D model of the environment. This study investigates the existing gap in fully immersive teleoperation systems within the mining industry, aiming to identify the optimal methods for their development and to ensure operators’ safety. To achieve this purpose, a literature search is employed to identify and extract information from the most relevant sources. The most advanced teleoperation systems are examined by focusing on their visualization types. Then, various 3D reconstruction techniques applicable to mining VR teleoperation are investigated, and their data acquisition methods, sensor technologies, and algorithms are analyzed. Ultimately, the study discusses challenges associated with 3D reconstruction techniques for mining teleoperation. The findings demonstrated that the real-time 3D reconstruction of underground mining environments primarily involves depth-based techniques. In contrast, point cloud generation techniques can mostly be employed for 3D reconstruction in open-pit mining operations.

    Applications of 3D Reconstruction in Virtual Reality-Based Teleoperation: A Review in the Mining Industry Alireza Kamran-Pishhesari Amin Moniri-Morad Javad Sattarvand doi: 10.3390/technologies12030040 Technologies 2024-03-15 Technologies 2024-03-15 12 3
    Review
    40 10.3390/technologies12030040 https://www.mdpi.com/2227-7080/12/3/40
    Technologies, Vol. 12, Pages 39: Reinforcement-Learning-Based Virtual Inertia Controller for Frequency Support in Islanded Microgrids https://www.mdpi.com/2227-7080/12/3/39 As the world grapples with the energy crisis, integrating renewable energy sources into the power grid has become increasingly crucial. Microgrids have emerged as a vital solution to this challenge. However, the reliance on renewable energy sources in microgrids often leads to low inertia. Renewable energy sources interfaced with the network through interlinking converters lack the inertia of conventional synchronous generators, and hence, need to provide frequency support through virtual inertia techniques. This paper presents a new control algorithm that utilizes the reinforcement learning agents Twin Delayed Deep Deterministic Policy Gradient (TD3) and Deep Deterministic Policy Gradient (DDPG) to support the frequency in low-inertia microgrids. The RL agents are trained using the system-linearized model and then extended to the nonlinear model to reduce the computational burden. The proposed system consists of an AC–DC microgrid comprising a renewable energy source on the DC microgrid, along with constant and resistive loads. On the AC microgrid side, a synchronous generator is utilized to represent the low inertia of the grid, which is accompanied by dynamic and static loads. The model of the system is developed and verified using Matlab/Simulink and the reinforcement learning toolbox. The system performance with the proposed AI-based methods is compared to conventional low-pass and high-pass filter (LPF and HPF) controllers. 2024-03-15 Technologies, Vol. 12, Pages 39: Reinforcement-Learning-Based Virtual Inertia Controller for Frequency Support in Islanded Microgrids

    Technologies doi: 10.3390/technologies12030039

    Authors: Mohamed A. Afifi Mostafa I. Marei Ahmed M. I. Mohamad

    As the world grapples with the energy crisis, integrating renewable energy sources into the power grid has become increasingly crucial. Microgrids have emerged as a vital solution to this challenge. However, the reliance on renewable energy sources in microgrids often leads to low inertia. Renewable energy sources interfaced with the network through interlinking converters lack the inertia of conventional synchronous generators, and hence, need to provide frequency support through virtual inertia techniques. This paper presents a new control algorithm that utilizes the reinforcement learning agents Twin Delayed Deep Deterministic Policy Gradient (TD3) and Deep Deterministic Policy Gradient (DDPG) to support the frequency in low-inertia microgrids. The RL agents are trained using the system-linearized model and then extended to the nonlinear model to reduce the computational burden. The proposed system consists of an AC–DC microgrid comprising a renewable energy source on the DC microgrid, along with constant and resistive loads. On the AC microgrid side, a synchronous generator is utilized to represent the low inertia of the grid, which is accompanied by dynamic and static loads. The model of the system is developed and verified using Matlab/Simulink and the reinforcement learning toolbox. The system performance with the proposed AI-based methods is compared to conventional low-pass and high-pass filter (LPF and HPF) controllers.
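
    The idea behind virtual inertia, whether implemented by the filter-based baselines or learned by the RL agents, is to inject a power correction driven by the frequency deviation and its rate of change; a minimal discrete-time sketch of that conventional control law, with assumed gain values, follows.

```python
# Sketch of a conventional virtual-inertia law: the converter injects
# P = -K_i * df/dt - K_d * (f - f_nom). Gains and time step are assumed;
# the paper's RL agents (TD3/DDPG) learn such a mapping instead.
F_NOM = 50.0       # nominal frequency, Hz
K_INERTIA = 8.0    # virtual inertia gain (assumed)
K_DAMPING = 15.0   # damping gain (assumed)
DT = 0.01          # controller time step, s

def virtual_inertia_power(f_now, f_prev):
    """Power correction (per-unit) from frequency and its derivative."""
    rocof = (f_now - f_prev) / DT  # rate of change of frequency
    return -K_INERTIA * rocof - K_DAMPING * (f_now - F_NOM)

# Example: frequency sagging from 50.00 to 49.99 Hz in one step yields a
# positive power injection that opposes the drop.
print(virtual_inertia_power(49.99, 50.00))
```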

    Reinforcement-Learning-Based Virtual Inertia Controller for Frequency Support in Islanded Microgrids Mohamed A. Afifi Mostafa I. Marei Ahmed M. I. Mohamad doi: 10.3390/technologies12030039 Technologies 2024-03-15 Technologies 2024-03-15 12 3
    Article
    39 10.3390/technologies12030039 https://www.mdpi.com/2227-7080/12/3/39
    Technologies, Vol. 12, Pages 38: Examining the Landscape of Cognitive Fatigue Detection: A Comprehensive Survey https://www.mdpi.com/2227-7080/12/3/38 Cognitive fatigue, a state of reduced mental capacity arising from prolonged cognitive activity, poses significant challenges in various domains, from road safety to workplace productivity. Accurately detecting and mitigating cognitive fatigue is crucial for ensuring optimal performance and minimizing potential risks. This paper presents a comprehensive survey of the current landscape in cognitive fatigue detection. We systematically review various approaches, encompassing physiological, behavioral, and performance-based measures, for robust and objective fatigue detection. The paper further analyzes different challenges, including the lack of standardized ground truth and the need for context-aware fatigue assessment. This survey aims to serve as a valuable resource for researchers and practitioners seeking to understand and address the multifaceted challenge of cognitive fatigue detection. 2024-03-11 Technologies, Vol. 12, Pages 38: Examining the Landscape of Cognitive Fatigue Detection: A Comprehensive Survey

    Technologies doi: 10.3390/technologies12030038

    Authors: Enamul Karim Hamza Reza Pavel Sama Nikanfar Aref Hebri Ayon Roy Harish Ram Nambiappan Ashish Jaiswal Glenn R. Wylie Fillia Makedon

    Cognitive fatigue, a state of reduced mental capacity arising from prolonged cognitive activity, poses significant challenges in various domains, from road safety to workplace productivity. Accurately detecting and mitigating cognitive fatigue is crucial for ensuring optimal performance and minimizing potential risks. This paper presents a comprehensive survey of the current landscape in cognitive fatigue detection. We systematically review various approaches, encompassing physiological, behavioral, and performance-based measures, for robust and objective fatigue detection. The paper further analyzes different challenges, including the lack of standardized ground truth and the need for context-aware fatigue assessment. This survey aims to serve as a valuable resource for researchers and practitioners seeking to understand and address the multifaceted challenge of cognitive fatigue detection.

    Examining the Landscape of Cognitive Fatigue Detection: A Comprehensive Survey Enamul Karim Hamza Reza Pavel Sama Nikanfar Aref Hebri Ayon Roy Harish Ram Nambiappan Ashish Jaiswal Glenn R. Wylie Fillia Makedon doi: 10.3390/technologies12030038 Technologies 2024-03-11 Technologies 2024-03-11 12 3
    Review
    38 10.3390/technologies12030038 https://www.mdpi.com/2227-7080/12/3/38
    Technologies, Vol. 12, Pages 37: Pioneering a Framework for Robust Telemedicine Technology Assessment (Telemechron Study) https://www.mdpi.com/2227-7080/12/3/37 The field of technology assessment in telemedicine is garnering increasing attention due to the widespread adoption of this discipline and to its complex and heterogeneous system characteristics, which make such assessment challenging to apply. As part of a national telemedicine project, the National Center for Innovative Technologies in Public Health at the Italian National Institute of Health played the role of promoting and utilizing technology assessment tools within partnership projects. This study aims to outline the design, development, and application of assessment methodologies within the telemedicine project proposed by the ISS team, utilizing a specific framework developed within the project. The sub-objectives include evaluating the proposed methodology’s effectiveness and feasibility, gathering feedback for improvement, and assessing its impact on various project components. The study emphasizes the multifaceted nature of action domains and underscores the crucial role of technology assessments in telemedicine, highlighting its impact across diverse realms through iterative interaction cycles with project partners. Both the impact and the acceptance of the methodology have been assessed by means of specific computer-aided web interviewing (CAWI) tools. The proposed methodology received significant acceptance, providing valuable insights for refining future frameworks. The impact assessment revealed a consistent quality improvement trend in the project’s products, evident in methodological consolidations. The overall message encourages similar initiatives in this domain, shedding light on the intricacies of technology assessment implementation. In conclusion, the study serves as a comprehensive outcome of the national telemedicine project, attesting to the success and adaptability of the technology assessment methodology and advocating for further exploration and implementation in analogous contexts. 2024-03-08 Technologies, Vol. 12, Pages 37: Pioneering a Framework for Robust Telemedicine Technology Assessment (Telemechron Study)

    Technologies doi: 10.3390/technologies12030037

    Authors: Sandra Morelli Carla Daniele Giuseppe D’Avenio Mauro Grigioni Daniele Giansanti

    The field of technology assessment in telemedicine is garnering increasing attention due to the widespread adoption of this discipline and to its complex and heterogeneous system characteristics, which make such assessment challenging to apply. As part of a national telemedicine project, the National Center for Innovative Technologies in Public Health at the Italian National Institute of Health played the role of promoting and utilizing technology assessment tools within partnership projects. This study aims to outline the design, development, and application of assessment methodologies within the telemedicine project proposed by the ISS team, utilizing a specific framework developed within the project. The sub-objectives include evaluating the proposed methodology’s effectiveness and feasibility, gathering feedback for improvement, and assessing its impact on various project components. The study emphasizes the multifaceted nature of action domains and underscores the crucial role of technology assessments in telemedicine, highlighting its impact across diverse realms through iterative interaction cycles with project partners. Both the impact and the acceptance of the methodology have been assessed by means of specific computer-aided web interviewing (CAWI) tools. The proposed methodology received significant acceptance, providing valuable insights for refining future frameworks. The impact assessment revealed a consistent quality improvement trend in the project’s products, evident in methodological consolidations. The overall message encourages similar initiatives in this domain, shedding light on the intricacies of technology assessment implementation. In conclusion, the study serves as a comprehensive outcome of the national telemedicine project, attesting to the success and adaptability of the technology assessment methodology and advocating for further exploration and implementation in analogous contexts.

    Pioneering a Framework for Robust Telemedicine Technology Assessment (Telemechron Study) Sandra Morelli Carla Daniele Giuseppe D’Avenio Mauro Grigioni Daniele Giansanti doi: 10.3390/technologies12030037 Technologies 2024-03-08 Technologies 2024-03-08 12 3
    Article
    37 10.3390/technologies12030037 https://www.mdpi.com/2227-7080/12/3/37
    Technologies, Vol. 12, Pages 36: Energy Sustainability Indicators for the Use of Biomass as Fuel for the Sugar Industry https://www.mdpi.com/2227-7080/12/3/36 There are numerous analytical and/or computational tools for evaluating the energetic sustainability of biomass in the sugar industry. However, the simultaneous integration of the energetic–exergetic and emergetic criteria for such evaluation is still insufficient. The objective of the present work is to propose a range of indicators to evaluate the sustainability of the use of biomass as fuel in the sugar industry. For this purpose, energy, exergy, and emergy evaluation tools were theoretically used as sustainability indicators. They were validated in five variants of different biomass and their mixtures in two studies of technologies used in Cuba for the sugar industry. As a result, the energy method showed, for all variants, an increase in efficiency of about 5% in the VU-40 technology compared to the Retal technology. There is an increase in energy efficiency when considering AHRs of 2.8% or Marabu (Dichrostachys cinerea) (5.3%) compared to the V1 variant. Through the study of the exergetic efficiency, an increase of 2% was determined in both technologies for the case of the V1 variant, and an increase in efficiency is observed in the V2 variant of 5% and the V3 variant (5.6%) over the V1 variant. The emergetic method showed superior results for the VU-40 technology over the Retal technology due to higher fuel utilization. In the case of the V1 variant, there was a 7% increase in the renewability ratio and an 11.07% increase in the sustainability index. This is because more energy is produced per unit of environmental load. 2024-03-08 Technologies, Vol. 12, Pages 36: Energy Sustainability Indicators for the Use of Biomass as Fuel for the Sugar Industry

    Technologies doi: 10.3390/technologies12030036

    Authors: Reinier Jiménez Borges Luis Angel Iturralde Carrera Eduardo Julio Lopez Bastida José R. García-Martínez Roberto V. Carrillo-Serrano Juvenal Rodríguez-Reséndiz

    There are numerous analytical and/or computational tools for evaluating the energetic sustainability of biomass in the sugar industry. However, the simultaneous integration of the energetic–exergetic and emergetic criteria for such evaluation is still insufficient. The objective of the present work is to propose a range of indicators to evaluate the sustainability of the use of biomass as fuel in the sugar industry. For this purpose, energy, exergy, and emergy evaluation tools were theoretically used as sustainability indicators. They were validated in five variants of different biomass and their mixtures in two studies of technologies used in Cuba for the sugar industry. As a result, the energy method showed, for all variants, an increase in efficiency of about 5% in the VU-40 technology compared to the Retal technology. There is an increase in energy efficiency when considering AHRs of 2.8% or Marabu (Dichrostachys cinerea) (5.3%) compared to the V1 variant. Through the study of the exergetic efficiency, an increase of 2% was determined in both technologies for the case of the V1 variant, and an increase in efficiency is observed in the V2 variant of 5% and the V3 variant (5.6%) over the V1 variant. The emergetic method showed superior results for the VU-40 technology over the Retal technology due to higher fuel utilization. In the case of the V1 variant, there was a 7% increase in the renewability ratio and an 11.07% increase in the sustainability index. This is because more energy is produced per unit of environmental load.
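
    For readers unfamiliar with the emergy indicators mentioned above, the renewability ratio and sustainability index can be computed from the renewable (R), local nonrenewable (N), and purchased (F) emergy flows; the sketch below uses common textbook definitions with invented flow values, which may differ in detail from the paper's formulation.

```python
# Sketch: standard emergy-based indicators from aggregate flows.
# R, N, F values are invented; the definitions follow common emergy
# practice and may differ in detail from the paper's formulation.
def emergy_indicators(R, N, F):
    """R: renewable, N: nonrenewable local, F: purchased emergy (sej/yr)."""
    Y = R + N + F          # total emergy yield
    eyr = Y / F            # emergy yield ratio
    elr = (N + F) / R      # environmental loading ratio
    esi = eyr / elr        # emergy sustainability index
    renewability = R / Y   # renewable fraction of total emergy
    return {"EYR": eyr, "ELR": elr, "ESI": esi, "renewability": renewability}

print(emergy_indicators(R=6.0e20, N=1.5e20, F=2.5e20))
```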

    Energy Sustainability Indicators for the Use of Biomass as Fuel for the Sugar Industry Reinier Jiménez Borges Luis Angel Iturralde Carrera Eduardo Julio Lopez Bastida José R. García-Martínez Roberto V. Carrillo-Serrano Juvenal Rodríguez-Reséndiz doi: 10.3390/technologies12030036 Technologies 2024-03-08 Technologies 2024-03-08 12 3
    Article
    36 10.3390/technologies12030036 https://www.mdpi.com/2227-7080/12/3/36
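    The emergy indicators cited above (renewability ratio, sustainability index) follow standard emergy-accounting definitions. The sketch below computes them from the three canonical emergy flows; the flow values are hypothetical placeholders, not the paper's data.

```python
# Standard emergy-accounting indicators (textbook definitions); the flow
# values below are hypothetical, not taken from the study.

def emergy_indicators(R, N, F):
    """R: renewable emergy, N: non-renewable local emergy, F: purchased emergy
    (all in solar emjoules, seJ)."""
    U = R + N + F                 # total emergy use
    EYR = U / F                   # emergy yield ratio
    ELR = (N + F) / R             # environmental loading ratio
    ESI = EYR / ELR               # emergy sustainability index
    renewability = R / U          # renewable fraction of total emergy
    return {"EYR": EYR, "ELR": ELR, "ESI": ESI, "renewability": renewability}

print(emergy_indicators(R=6.0e18, N=1.5e18, F=2.0e18))
```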
    Technologies, Vol. 12, Pages 35: A 28 GHz Highly Linear Up-Conversion Mixer for 5G Cellular Communications https://www.mdpi.com/2227-7080/12/3/35 In this paper, we present a highly linear direct in-phase/quadrature (I/Q) up-conversion mixer for 5G millimeter-wave applications. To enhance the linearity of the mixer, we propose a complementary derivative superposition technique with pre-distortion. The proposed up-conversion mixer consists of a quadrature generator, LO buffer amplifiers, and an I/Q up-conversion mixer core and achieves an output third-order intercept point of 15.7 dBm and an output 1 dB compression point of 2 dBm at 27.6 GHz, while it consumes 15 mW at a supply voltage of 1 V. The conversion gain is 11.4 dB and the LO leakage and image rejection ratio are −56 dBc and 61 dB, respectively, in the measurement. The proposed I/Q up-conversion mixer is suitable for 5G cellular communication systems. 2024-03-07 Technologies, Vol. 12, Pages 35: A 28 GHz Highly Linear Up-Conversion Mixer for 5G Cellular Communications

    Technologies doi: 10.3390/technologies12030035

    Authors: Chul-Woo Byeon

    In this paper, we present a highly linear direct in-phase/quadrature (I/Q) up-conversion mixer for 5G millimeter-wave applications. To enhance the linearity of the mixer, we propose a complementary derivative superposition technique with pre-distortion. The proposed up-conversion mixer consists of a quadrature generator, LO buffer amplifiers, and an I/Q up-conversion mixer core and achieves an output third-order intercept point of 15.7 dBm and an output 1 dB compression point of 2 dBm at 27.6 GHz, while it consumes 15 mW at a supply voltage of 1 V. The conversion gain is 11.4 dB and the LO leakage and image rejection ratio are −56 dBc and 61 dB, respectively, in the measurement. The proposed I/Q up-conversion mixer is suitable for 5G cellular communication systems.

    A 28 GHz Highly Linear Up-Conversion Mixer for 5G Cellular Communications Chul-Woo Byeon doi: 10.3390/technologies12030035 Technologies 2024-03-07 Technologies 2024-03-07 12 3
    Communication
    35 10.3390/technologies12030035 https://www.mdpi.com/2227-7080/12/3/35
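    The linearity figures above follow from standard two-tone arithmetic: the output third-order intercept point sits half the carrier-to-IM3 spacing above the per-tone output power. A minimal sketch; the 0 dBm tone power and −31.4 dBc IM3 level are assumed for illustration, and only the 15.7 dBm OIP3 comes from the abstract.

```python
# Two-tone linearity arithmetic (standard RF definition, not a measurement
# procedure from the paper).

def oip3_dbm(p_out_dbm, im3_dbc):
    """Output third-order intercept point from per-tone output power [dBm]
    and the IM3 level relative to the carrier [dBc, negative]."""
    return p_out_dbm + abs(im3_dbc) / 2.0

# Hypothetical two-tone measurement: 0 dBm per tone with IM3 at -31.4 dBc
# would imply OIP3 = 15.7 dBm, matching the reported figure.
print(oip3_dbm(0.0, -31.4))  # -> 15.7
```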
    Technologies, Vol. 12, Pages 34: Reinforcement Learning as an Approach to Train Multiplayer First-Person Shooter Game Agents https://www.mdpi.com/2227-7080/12/3/34 Artificial Intelligence bots are extensively used in multiplayer First-Person Shooter (FPS) games. By using Machine Learning techniques, we can improve their performance and bring them to human skill levels. In this work, we focused on comparing and combining two Reinforcement Learning training architectures, Curriculum Learning and Behaviour Cloning, applied to an FPS developed in the Unity Engine. We created four teams of three agents each: one team for Curriculum Learning, one for Behaviour Cloning, and another two for two different methods of combining Curriculum Learning and Behaviour Cloning. After completing the training, each agent was matched to battle against an agent of a different team until each pairing had five wins or ten time-outs. In the end, the results showed that the agents trained with Curriculum Learning achieved better performance than the ones trained with Behaviour Cloning, with 23.67% more average victories in one case. As for the combination attempts, not only did the agents trained with both devised methods have problems during training, but they also achieved insufficient results in battle, averaging zero wins. 2024-03-05 Technologies, Vol. 12, Pages 34: Reinforcement Learning as an Approach to Train Multiplayer First-Person Shooter Game Agents

    Technologies doi: 10.3390/technologies12030034

    Authors: Pedro Almeida Vítor Carvalho Alberto Simões

    Artificial Intelligence bots are extensively used in multiplayer First-Person Shooter (FPS) games. By using Machine Learning techniques, we can improve their performance and bring them to human skill levels. In this work, we focused on comparing and combining two Reinforcement Learning training architectures, Curriculum Learning and Behaviour Cloning, applied to an FPS developed in the Unity Engine. We created four teams of three agents each: one team for Curriculum Learning, one for Behaviour Cloning, and another two for two different methods of combining Curriculum Learning and Behaviour Cloning. After completing the training, each agent was matched to battle against an agent of a different team until each pairing had five wins or ten time-outs. In the end, the results showed that the agents trained with Curriculum Learning achieved better performance than the ones trained with Behaviour Cloning, with 23.67% more average victories in one case. As for the combination attempts, not only did the agents trained with both devised methods have problems during training, but they also achieved insufficient results in battle, averaging zero wins.

    Reinforcement Learning as an Approach to Train Multiplayer First-Person Shooter Game Agents Pedro Almeida Vítor Carvalho Alberto Simões doi: 10.3390/technologies12030034 Technologies 2024-03-05 Technologies 2024-03-05 12 3
    Article
    34 10.3390/technologies12030034 https://www.mdpi.com/2227-7080/12/3/34
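    Curriculum Learning, as used above, gates task difficulty on recent agent performance. A generic scheduler sketch follows; this is not the paper's Unity ML-Agents configuration, and the lesson thresholds and window size are assumptions.

```python
# Minimal curriculum-learning scheduler sketch: difficulty advances once the
# rolling mean episode reward clears a per-lesson threshold.

from collections import deque

class Curriculum:
    def __init__(self, thresholds, window=100):
        self.thresholds = thresholds        # reward needed to pass each lesson
        self.lesson = 0
        self.rewards = deque(maxlen=window)

    def report(self, episode_reward):
        self.rewards.append(episode_reward)
        window_full = len(self.rewards) == self.rewards.maxlen
        if (window_full and self.lesson < len(self.thresholds)
                and sum(self.rewards) / len(self.rewards)
                    >= self.thresholds[self.lesson]):
            self.lesson += 1                # unlock the next, harder lesson
            self.rewards.clear()
        return self.lesson

# Usage: curriculum = Curriculum(thresholds=[0.3, 0.6, 0.9]);
# call curriculum.report(r) after each episode to get the current lesson.
```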
    Technologies, Vol. 12, Pages 33: A Comparison between Kinematic Models for Robotic Needle Insertion with Application into Transperineal Prostate Biopsy https://www.mdpi.com/2227-7080/12/3/33 Transperineal prostate biopsy is the most reliable technique for detecting prostate cancer, and robot-assisted needle insertion has the potential to improve the accuracy of this procedure. Modeling the interaction between a bevel-tip needle and the tissue, considering tissue heterogeneity, needle bending, and tissue/organ deformation and movement, is a required step to enable robotic needle insertion. Although several models exist, they have never been compared on experimental grounds. Motivated by this, this paper proposes an experimental comparison of kinematic models of needle insertion, considering different needle insertion speeds and different degrees of tissue stiffness. The comparison considers automated insertions of needles into transparent silicone phantoms under stereo-image guidance and evaluates the accuracy of existing models in predicting needle deformation. 2024-03-01 Technologies, Vol. 12, Pages 33: A Comparison between Kinematic Models for Robotic Needle Insertion with Application into Transperineal Prostate Biopsy

    Technologies doi: 10.3390/technologies12030033

    Authors: Chiara Zandonà Andrea Roberti Davide Costanzi Burçin Gül Özge Akbulut Paolo Fiorini Andrea Calanca

    Transperineal prostate biopsy is the most reliable technique for detecting prostate cancer, and robot-assisted needle insertion has the potential to improve the accuracy of this procedure. Modeling the interaction between a bevel-tip needle and the tissue, considering tissue heterogeneity, needle bending, and tissue/organ deformation and movement, is a required step to enable robotic needle insertion. Although several models exist, they have never been compared on experimental grounds. Motivated by this, this paper proposes an experimental comparison of kinematic models of needle insertion, considering different needle insertion speeds and different degrees of tissue stiffness. The comparison considers automated insertions of needles into transparent silicone phantoms under stereo-image guidance and evaluates the accuracy of existing models in predicting needle deformation.

    A Comparison between Kinematic Models for Robotic Needle Insertion with Application into Transperineal Prostate Biopsy Chiara Zandonà Andrea Roberti Davide Costanzi Burçin Gül Özge Akbulut Paolo Fiorini Andrea Calanca doi: 10.3390/technologies12030033 Technologies 2024-03-01 Technologies 2024-03-01 12 3
    Article
    33 10.3390/technologies12030033 https://www.mdpi.com/2227-7080/12/3/33
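    The classic kinematic model family compared in such studies treats a bevel-tip needle as a nonholonomic vehicle tracing an arc of constant curvature when inserted without axial rotation (the unicycle model of Webster et al.). A planar sketch with illustrative parameters, not the paper's fitted values:

```python
# Planar constant-curvature (unicycle) needle-tip model: without axial
# rotation, the bevel steers the tip along an arc of curvature kappa.

import math

def simulate_tip(kappa, insertion_depth, steps=1000):
    x, y, theta = 0.0, 0.0, 0.0          # tip pose in the insertion plane
    ds = insertion_depth / steps          # arc-length increment per step
    path = [(x, y)]
    for _ in range(steps):
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        theta += kappa * ds               # bevel steers at constant curvature
        path.append((x, y))
    return path

# Hypothetical curvature (mm^-1) and depth (mm); the final point gives the
# predicted lateral tip deflection.
tip_path = simulate_tip(kappa=0.005, insertion_depth=100.0)
print(tip_path[-1])
```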
    Technologies, Vol. 12, Pages 32: Measurement of Light-Duty Vehicle Exhaust Emissions with Light Absorption Spectrometers https://www.mdpi.com/2227-7080/12/3/32 Light-duty vehicle emission regulations worldwide set limits for the following gaseous pollutants: carbon monoxide (CO), nitrogen oxides (NOX), hydrocarbons (HCs), and/or non-methane hydrocarbons (NMHCs). Carbon dioxide (CO2) is indirectly limited by fleet CO2 or fuel consumption targets. Measurements are carried out at the dilution tunnel with “standard” laboratory-grade instruments following well-defined principles of operation: non-dispersive infrared (NDIR) analyzers for CO and CO2, flame ionization detectors (FIDs) for hydrocarbons, and chemiluminescence analyzers (CLAs) or non-dispersive ultraviolet detectors (NDUVs) for NOX. In the United States in 2012 and in China in 2020, with Stage 6, nitrous oxide (N2O) was also included. Brazil is phasing in NH3 in its regulation. Alternative instruments that can measure some or all of these pollutants include Fourier transform infrared (FTIR)- and laser absorption spectroscopy (LAS)-based instruments. The second category includes quantum cascade laser (QCL) spectroscopy in the mid-infrared region and laser diode spectroscopy (LDS) in the near-infrared region, such as tunable diode laser absorption spectroscopy (TDLAS). According to current regulations and technical specifications, NH3 is the only component that has to be measured at the tailpipe to avoid ammonia losses due to its hydrophilic properties and adsorption on the transfer lines. Few studies have evaluated such instruments, in particular those for “non-regulated” worldwide pollutants. For this reason, we compared laboratory-grade “standard” analyzers with FTIR- and TDLAS-based instruments measuring NH3. One diesel and two gasoline vehicles at different ambient temperatures and with different test cycles produced emissions over a wide range. In general, the agreement among the instruments was very good (in most cases, within ±10%), confirming their suitability for the measurement of pollutants. 2024-02-28 Technologies, Vol. 12, Pages 32: Measurement of Light-Duty Vehicle Exhaust Emissions with Light Absorption Spectrometers

    Technologies doi: 10.3390/technologies12030032

    Authors: Barouch Giechaskiel Anastasios Melas Jacopo Franzetti Victor Valverde Michaël Clairotte Ricardo Suarez-Bertoa

    Light-duty vehicle emission regulations worldwide set limits for the following gaseous pollutants: carbon monoxide (CO), nitrogen oxides (NOX), hydrocarbons (HCs), and/or non-methane hydrocarbons (NMHCs). Carbon dioxide (CO2) is indirectly limited by fleet CO2 or fuel consumption targets. Measurements are carried out at the dilution tunnel with “standard” laboratory-grade instruments following well-defined principles of operation: non-dispersive infrared (NDIR) analyzers for CO and CO2, flame ionization detectors (FIDs) for hydrocarbons, and chemiluminescence analyzers (CLAs) or non-dispersive ultraviolet detectors (NDUVs) for NOX. In the United States in 2012 and in China in 2020, with Stage 6, nitrous oxide (N2O) was also included. Brazil is phasing in NH3 in its regulation. Alternative instruments that can measure some or all of these pollutants include Fourier transform infrared (FTIR)- and laser absorption spectroscopy (LAS)-based instruments. The second category includes quantum cascade laser (QCL) spectroscopy in the mid-infrared region and laser diode spectroscopy (LDS) in the near-infrared region, such as tunable diode laser absorption spectroscopy (TDLAS). According to current regulations and technical specifications, NH3 is the only component that has to be measured at the tailpipe to avoid ammonia losses due to its hydrophilic properties and adsorption on the transfer lines. Few studies have evaluated such instruments, in particular those for “non-regulated” worldwide pollutants. For this reason, we compared laboratory-grade “standard” analyzers with FTIR- and TDLAS-based instruments measuring NH3. One diesel and two gasoline vehicles at different ambient temperatures and with different test cycles produced emissions over a wide range. In general, the agreement among the instruments was very good (in most cases, within ±10%), confirming their suitability for the measurement of pollutants.

    Measurement of Light-Duty Vehicle Exhaust Emissions with Light Absorption Spectrometers Barouch Giechaskiel Anastasios Melas Jacopo Franzetti Victor Valverde Michaël Clairotte Ricardo Suarez-Bertoa doi: 10.3390/technologies12030032 Technologies 2024-02-28 Technologies 2024-02-28 12 3
    Article
    32 10.3390/technologies12030032 https://www.mdpi.com/2227-7080/12/3/32
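    All the absorption-based analyzers above (NDIR, QCL, TDLAS) ultimately rest on the Beer–Lambert law. A minimal sketch of recovering a gas number density from transmitted intensity, with hypothetical values for the cross-section, path length, and intensities:

```python
# Beer-Lambert arithmetic underlying absorption-based gas analyzers
# (generic physics, not any instrument's calibration procedure).

import math

def number_density(I0, I, sigma, L):
    """Number density from transmittance: I = I0 * exp(-sigma * N * L).
    sigma: absorption cross-section [cm^2], L: optical path length [cm]."""
    return math.log(I0 / I) / (sigma * L)

# Hypothetical measurement: 8% attenuation over a 10 cm path.
N = number_density(I0=1.00, I=0.92, sigma=1.2e-18, L=10.0)
print(f"{N:.3e} molecules/cm^3")
```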
    Technologies, Vol. 12, Pages 31: Visualization of Spatial–Temporal Epidemiological Data: A Scoping Review https://www.mdpi.com/2227-7080/12/3/31 In recent years, the proliferation of health data sources due to computer technologies has prompted the use of visualization techniques to tackle epidemiological challenges. However, existing reviews lack a specific focus on the spatial and temporal analysis of epidemiological data using visualization tools. This study aims to address this gap by conducting a scoping review following the PRISMA-ScR guidelines, examining the literature from 2000 up to 24 January 2024 on spatial–temporal visualization techniques applied to epidemics across five databases: PubMed, IEEE Xplore, Scopus, Google Scholar, and ACM Digital Library. Among 1312 papers reviewed, 114 were selected, emphasizing aggregate measures, web platform tools, and geospatial data representation, particularly favoring choropleth maps and extended charts. Visualization techniques were predominantly utilized for real-time data presentation, trend analysis, and predictions. Evaluation methods, categorized into standard methodology, user experience, task efficiency, and accuracy, were observed. Although various open-access datasets were available, only a few were commonly used, mainly those related to COVID-19. This study sheds light on the current trends in visualizing epidemiological data over the past 24 years, highlighting the gaps in standardized evaluation methodologies and the limited exploration of individual epidemiological data and diseases acquired in hospitals during epidemics. 2024-02-28 Technologies, Vol. 12, Pages 31: Visualization of Spatial–Temporal Epidemiological Data: A Scoping Review

    Technologies doi: 10.3390/technologies12030031

    Authors: Denisse Kim Bernardo Cánovas-Segura Manuel Campos Jose M. Juarez

    In recent years, the proliferation of health data sources due to computer technologies has prompted the use of visualization techniques to tackle epidemiological challenges. However, existing reviews lack a specific focus on the spatial and temporal analysis of epidemiological data using visualization tools. This study aims to address this gap by conducting a scoping review following the PRISMA-ScR guidelines, examining the literature from 2000 up to 24 January 2024 on spatial–temporal visualization techniques applied to epidemics across five databases: PubMed, IEEE Xplore, Scopus, Google Scholar, and ACM Digital Library. Among 1312 papers reviewed, 114 were selected, emphasizing aggregate measures, web platform tools, and geospatial data representation, particularly favoring choropleth maps and extended charts. Visualization techniques were predominantly utilized for real-time data presentation, trend analysis, and predictions. Evaluation methods, categorized into standard methodology, user experience, task efficiency, and accuracy, were observed. Although various open-access datasets were available, only a few were commonly used, mainly those related to COVID-19. This study sheds light on the current trends in visualizing epidemiological data over the past 24 years, highlighting the gaps in standardized evaluation methodologies and the limited exploration of individual epidemiological data and diseases acquired in hospitals during epidemics.

    Visualization of Spatial–Temporal Epidemiological Data: A Scoping Review Denisse Kim Bernardo Cánovas-Segura Manuel Campos Jose M. Juarez doi: 10.3390/technologies12030031 Technologies 2024-02-28 Technologies 2024-02-28 12 3
    Review
    31 10.3390/technologies12030031 https://www.mdpi.com/2227-7080/12/3/31
    Technologies, Vol. 12, Pages 30: Mapping Acoustic Frictional Properties of Self-Lubricating Epoxy-Coated Bearing Steel with Acoustic Emissions during Friction Test https://www.mdpi.com/2227-7080/12/3/30 This work investigates the stick–slip phenomenon during sliding motion between solid lubricant-impregnated epoxy polymer-coated steel bars and AISI 52100 steel balls. An acoustic sensor detected the stick–slip phenomenon during the tribo-pair interaction. The wear characteristics of the workpiece coated with different epoxy coatings were observed and scrutinized. The RMS values of the acoustic sensor were correlated with the friction coefficient to develop an acoustic-sensor-based standard, leading to the detection of the stick–slip phenomenon. As per the findings, the acoustic waveform remained closely aligned with the friction coefficient observed during the study and can be used effectively in detecting the stick–slip phenomenon in steel–polymer interactions. This work will be highly beneficial in industrial and automotive applications with significant interaction between polymer and steel surfaces. 2024-02-24 Technologies, Vol. 12, Pages 30: Mapping Acoustic Frictional Properties of Self-Lubricating Epoxy-Coated Bearing Steel with Acoustic Emissions during Friction Test

    Technologies doi: 10.3390/technologies12030030

    Authors: Venkatasubramanian Krishnamoorthy Ashvita Anitha John Shubrajit Bhaumik Viorel Paleu

    This work investigates the stick–slip phenomenon during sliding motion between solid lubricant-impregnated epoxy polymer-coated steel bars and AISI 52100 steel balls. An acoustic sensor detected the stick–slip phenomenon during the tribo-pair interaction. The wear characteristics of the workpiece coated with different epoxy coatings were observed and scrutinized. The RMS values of the acoustic sensor were correlated with the friction coefficient to develop an acoustic-sensor-based standard, leading to the detection of the stick–slip phenomenon. As per the findings, the acoustic waveform remained closely aligned with the friction coefficient observed during the study and can be used effectively in detecting the stick–slip phenomenon in steel–polymer interactions. This work will be highly beneficial in industrial and automotive applications with significant interaction between polymer and steel surfaces.

    Mapping Acoustic Frictional Properties of Self-Lubricating Epoxy-Coated Bearing Steel with Acoustic Emissions during Friction Test Venkatasubramanian Krishnamoorthy Ashvita Anitha John Shubrajit Bhaumik Viorel Paleu doi: 10.3390/technologies12030030 Technologies 2024-02-24 Technologies 2024-02-24 12 3
    Article
    30 10.3390/technologies12030030 https://www.mdpi.com/2227-7080/12/3/30
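    The RMS-versus-friction correlation described above can be reproduced in outline as follows; the traces below are synthetic stand-ins for the acoustic-emission and friction-coefficient recordings, and the window length is an assumption.

```python
# Windowed RMS of an acoustic-emission trace, correlated against a friction
# trace sampled over the same test (synthetic data throughout).

import numpy as np

def windowed_rms(signal, window):
    """RMS of consecutive non-overlapping windows."""
    n = len(signal) // window
    chunks = signal[: n * window].reshape(n, window)
    return np.sqrt((chunks ** 2).mean(axis=1))

rng = np.random.default_rng(0)
acoustic = rng.normal(0, 1, 100_000) * np.linspace(1, 2, 100_000)  # synthetic AE
friction = np.linspace(0.08, 0.16, 100)                            # synthetic mu

rms = windowed_rms(acoustic, window=1000)          # 100 RMS values
r = np.corrcoef(rms, friction)[0, 1]               # Pearson correlation
print(f"correlation between AE RMS and friction: {r:.2f}")
```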
    Technologies, Vol. 12, Pages 29: A Kinetic Study of a Photo-Oxidation Reaction between α-Terpinene and Singlet Oxygen in a Novel Oscillatory Baffled Photo Reactor https://www.mdpi.com/2227-7080/12/3/29 By planting LEDs on the surfaces of orifice baffles, a novel batch oscillatory baffled photoreactor (OBPR), together with polymer-supported Rose Bengal (Ps-RB) beads, is used here to investigate the reaction kinetics of a photo-oxidation reaction between α-terpinene and singlet oxygen (1O2). In the mode of NMR data analysis that is widely used for this reaction, α-terpinene and ascaridole are treated as a reaction pair, assuming that singlet oxygen is kinetically in excess or constant. We have, for the first time, examined the validity of this method and discovered that increasing α-terpinene initially leads to an increase in ascaridole, indicating that the supply of singlet oxygen is in excess. Applying a kinetic analysis, pseudo-first-order reaction kinetics were confirmed, supporting this assumption. We subsequently developed a methodology for estimating 1O2 concentrations based on the proportionality of ascaridole concentrations with respect to their maximum under these conditions. With the help of the estimated singlet oxygen data, the efficiency of 1O2 utilization and the photo efficiency of converting molecular oxygen to 1O2 are further proposed and evaluated. We have also identified conditions under which a further increase in α-terpinene causes decreases in ascaridole, implying kinetically that 1O2 has now become a limiting reagent, and the method of treating α-terpinene and ascaridole as a reaction pair in the data analysis would no longer be valid under those conditions. 2024-02-21 Technologies, Vol. 12, Pages 29: A Kinetic Study of a Photo-Oxidation Reaction between α-Terpinene and Singlet Oxygen in a Novel Oscillatory Baffled Photo Reactor

    Technologies doi: 10.3390/technologies12030029

    Authors: Jianhan Chen Rohen Prinsloo Xiongwei Ni

    By planting LEDs on the surfaces of orifice baffles, a novel batch oscillatory baffled photoreactor (OBPR), together with polymer-supported Rose Bengal (Ps-RB) beads, is used here to investigate the reaction kinetics of a photo-oxidation reaction between α-terpinene and singlet oxygen (1O2). In the mode of NMR data analysis that is widely used for this reaction, α-terpinene and ascaridole are treated as a reaction pair, assuming that singlet oxygen is kinetically in excess or constant. We have, for the first time, examined the validity of this method and discovered that increasing α-terpinene initially leads to an increase in ascaridole, indicating that the supply of singlet oxygen is in excess. Applying a kinetic analysis, pseudo-first-order reaction kinetics were confirmed, supporting this assumption. We subsequently developed a methodology for estimating 1O2 concentrations based on the proportionality of ascaridole concentrations with respect to their maximum under these conditions. With the help of the estimated singlet oxygen data, the efficiency of 1O2 utilization and the photo efficiency of converting molecular oxygen to 1O2 are further proposed and evaluated. We have also identified conditions under which a further increase in α-terpinene causes decreases in ascaridole, implying kinetically that 1O2 has now become a limiting reagent, and the method of treating α-terpinene and ascaridole as a reaction pair in the data analysis would no longer be valid under those conditions.

    A Kinetic Study of a Photo-Oxidation Reaction between α-Terpinene and Singlet Oxygen in a Novel Oscillatory Baffled Photo Reactor Jianhan Chen Rohen Prinsloo Xiongwei Ni doi: 10.3390/technologies12030029 Technologies 2024-02-21 Technologies 2024-02-21 12 3
    Article
    29 10.3390/technologies12030029 https://www.mdpi.com/2227-7080/12/3/29
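    The pseudo-first-order check mentioned above amounts to verifying that ln(C/C0) for α-terpinene falls on a straight line of slope −k_obs. A sketch with synthetic concentration data, not the paper's NMR measurements:

```python
# Pseudo-first-order kinetics check: fit the slope of ln(C/C0) vs. time.
# Concentrations below are synthetic, generated with k = 0.12 1/min.

import numpy as np

t = np.array([0, 5, 10, 15, 20, 25], dtype=float)   # time [min]
C = 0.10 * np.exp(-0.12 * t)                         # alpha-terpinene [mol/L]

k_obs = -np.polyfit(t, np.log(C / C[0]), 1)[0]       # negative slope of ln-plot
print(f"k_obs = {k_obs:.3f} 1/min")                  # -> 0.120
```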
    Technologies, Vol. 12, Pages 28: Nested Contrastive Boundary Learning: Point Transformer Self-Attention Regularization for 3D Intracranial Aneurysm Segmentation https://www.mdpi.com/2227-7080/12/3/28 In 3D segmentation, point-based models excel but face difficulties in precise class delineation at class intersections, an inherent challenge in segmentation models. This is particularly critical in medical applications, influencing patient care and surgical planning, where accurate 3D boundary identification is essential for assisting surgery and enhancing medical training through advanced simulations. This study introduces the Nested Contrastive Boundary Learning Point Transformer (NCBL-PT), specially designed for 3D point cloud segmentation. NCBL-PT employs contrastive learning to improve boundary point representation by enhancing feature similarity within the same class. NCBL-PT incorporates a border-aware distinction within the same class points, allowing the model to distinctly learn from both points in proximity to the class intersection and from those beyond. This reduces semantic confusion among the points of different classes in the ambiguous class intersection zone, where similarity in features due to proximity could lead to incorrect associations. The model operates within subsampled point clouds at each encoder block stage of the point transformer architecture. It applies self-attention with k = 16 nearest neighbors to local neighborhoods, aligning with NCBL calculations for consistent self-attention regularization in local contexts. NCBL-PT improves 3D segmentation at class intersections, as evidenced by a 3.31% increase in Intersection over Union (IOU) for aneurysm segmentation compared to the base point transformer model. 2024-02-21 Technologies, Vol. 12, Pages 28: Nested Contrastive Boundary Learning: Point Transformer Self-Attention Regularization for 3D Intracranial Aneurysm Segmentation

    Technologies doi: 10.3390/technologies12030028

    Authors: Luis Felipe Estrella-Ibarra Alejandro de León-Cuevas Saul Tovar-Arriaga

    In 3D segmentation, point-based models excel but face difficulties in precise class delineation at class intersections, an inherent challenge in segmentation models. This is particularly critical in medical applications, influencing patient care and surgical planning, where accurate 3D boundary identification is essential for assisting surgery and enhancing medical training through advanced simulations. This study introduces the Nested Contrastive Boundary Learning Point Transformer (NCBL-PT), specially designed for 3D point cloud segmentation. NCBL-PT employs contrastive learning to improve boundary point representation by enhancing feature similarity within the same class. NCBL-PT incorporates a border-aware distinction within the same class points, allowing the model to distinctly learn from both points in proximity to the class intersection and from those beyond. This reduces semantic confusion among the points of different classes in the ambiguous class intersection zone, where similarity in features due to proximity could lead to incorrect associations. The model operates within subsampled point clouds at each encoder block stage of the point transformer architecture. It applies self-attention with k = 16 nearest neighbors to local neighborhoods, aligning with NCBL calculations for consistent self-attention regularization in local contexts. NCBL-PT improves 3D segmentation at class intersections, as evidenced by a 3.31% increase in Intersection over Union (IOU) for aneurysm segmentation compared to the base point transformer model.

    Nested Contrastive Boundary Learning: Point Transformer Self-Attention Regularization for 3D Intracranial Aneurysm Segmentation Luis Felipe Estrella-Ibarra Alejandro de León-Cuevas Saul Tovar-Arriaga doi: 10.3390/technologies12030028 Technologies 2024-02-21 Technologies 2024-02-21 12 3
    Article
    28 10.3390/technologies12030028 https://www.mdpi.com/2227-7080/12/3/28
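    A simplified pull/push contrastive term on boundary points conveys the idea behind NCBL; this is an illustrative stand-in, not the paper's exact nested loss, and the margin and masking scheme are assumptions.

```python
# Sketch of a boundary-focused contrastive loss: same-class boundary points
# are pulled together in feature space, different-class ones pushed apart.

import torch
import torch.nn.functional as F

def boundary_contrastive_loss(feats, labels, boundary_mask, margin=1.0):
    """feats: (N, D) point features, labels: (N,) class ids,
    boundary_mask: (N,) bool, True for points near a class intersection."""
    f = F.normalize(feats[boundary_mask], dim=1)
    y = labels[boundary_mask]
    d = torch.cdist(f, f)                          # pairwise feature distances
    same = (y[:, None] == y[None, :]).float()
    eye = torch.eye(len(f), device=f.device)
    pos = (d * same * (1 - eye)).sum() / ((same - eye).sum() + 1e-8)
    neg = (F.relu(margin - d) * (1 - same)).sum() / ((1 - same).sum() + 1e-8)
    return pos + neg                               # attract + repel terms

feats = torch.randn(64, 32)
labels = torch.randint(0, 4, (64,))
mask = torch.rand(64) < 0.5        # pretend half the points sit near a boundary
print(boundary_contrastive_loss(feats, labels, mask))
```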
    Technologies, Vol. 12, Pages 27: ARSIP: Automated Robotic System for Industrial Painting https://www.mdpi.com/2227-7080/12/2/27 This manuscript addresses the critical need for precise paint application to ensure product durability and aesthetics. While manual work carries risks, robotic systems promise accuracy, yet programming diverse product trajectories remains a challenge. This study aims to develop an autonomous system capable of generating paint trajectories based on object geometries for user-defined spraying processes. By emphasizing energy efficiency, process time, and coating thickness on complex surfaces, a hybrid optimization technique enhances overall efficiency. Extensive hardware and software development results in a robust robotic system leveraging the Robot Operating System (ROS). Integrating a low-cost 3D scanner, calibrator, and trajectory optimizer creates an autonomous painting system. Hardware components, including sensors, motors, and actuators, are seamlessly integrated with a Python and ROS-based software framework, enabling the desired automation. A web-based GUI, powered by JavaScript, allows user control over two robots, facilitating trajectory dispatch, 3D scanning, and optimization. Specific nodes manage calibration, validation, process settings, and real-time video feeds. The use of open-source software and an ROS ecosystem makes it a good choice for industrial-scale implementation. The results indicate that the proposed system can achieve the desired automation, contingent upon surface geometries, spraying processes, and robot dynamics. 2024-02-19 Technologies, Vol. 12, Pages 27: ARSIP: Automated Robotic System for Industrial Painting

    Technologies doi: 10.3390/technologies12020027

    Authors: Hossam A. Gabbar Muhammad Idrees

    This manuscript addresses the critical need for precise paint application to ensure product durability and aesthetics. While manual work carries risks, robotic systems promise accuracy, yet programming diverse product trajectories remains a challenge. This study aims to develop an autonomous system capable of generating paint trajectories based on object geometries for user-defined spraying processes. By emphasizing energy efficiency, process time, and coating thickness on complex surfaces, a hybrid optimization technique enhances overall efficiency. Extensive hardware and software development results in a robust robotic system leveraging the Robot Operating System (ROS). Integrating a low-cost 3D scanner, calibrator, and trajectory optimizer creates an autonomous painting system. Hardware components, including sensors, motors, and actuators, are seamlessly integrated with a Python and ROS-based software framework, enabling the desired automation. A web-based GUI, powered by JavaScript, allows user control over two robots, facilitating trajectory dispatch, 3D scanning, and optimization. Specific nodes manage calibration, validation, process settings, and real-time video feeds. The use of open-source software and an ROS ecosystem makes it a good choice for industrial-scale implementation. The results indicate that the proposed system can achieve the desired automation, contingent upon surface geometries, spraying processes, and robot dynamics.

    ARSIP: Automated Robotic System for Industrial Painting Hossam A. Gabbar Muhammad Idrees doi: 10.3390/technologies12020027 Technologies 2024-02-19 Technologies 2024-02-19 12 2
    Communication
    27 10.3390/technologies12020027 https://www.mdpi.com/2227-7080/12/2/27
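    A minimal rospy publisher sketch in the spirit of the trajectory-dispatch node above; the topic name, message type, and rate are hypothetical, not ARSIP's actual interfaces.

```python
# Minimal ROS 1 (rospy) node that streams spray waypoints as PoseStamped
# messages; topic name and rate are placeholder choices.

import rospy
from geometry_msgs.msg import PoseStamped

def publish_waypoints(waypoints):
    """Publish a list of (x, y, z) spray waypoints at 10 Hz."""
    pub = rospy.Publisher("/spray/trajectory", PoseStamped, queue_size=10)
    rospy.init_node("trajectory_dispatcher")
    rate = rospy.Rate(10)
    for x, y, z in waypoints:
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = x, y, z
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    publish_waypoints([(0.0, 0.0, 0.5), (0.1, 0.0, 0.5), (0.2, 0.0, 0.5)])
```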
    Technologies, Vol. 12, Pages 26: A National Innovation System Concept-Based Analysis of Autonomous Vehicles’ Potential in Reaching Zero-Emission Fleets https://www.mdpi.com/2227-7080/12/2/26 Change management for technology adoption in the transportation sector is often used to address long-term challenges characterized by complexity, uncertainty, and ambiguity. Especially when technology is still evolving, an analysis of these challenges can help explore alternative future pathways. Therefore, the analysis of development trajectories, correlations between key system variables, and the rate of change within the entire road transportation system can guide action toward sustainability. By adopting the National Innovation System concept, we evaluated the potential of the autonomous vehicle option to achieve a zero-emission fleet. A case-specific analysis was conducted to evaluate industry capacities, the performance of R&D organizations, the main objectives of future market-oriented reforms in the power sector, policy implications, and other aspects to gain insightful perspectives. Environmental insights for transportation sector scenarios in 2021, 2030, and 2050 were explored and analyzed using the COPERT v5.5.1 software program. This study offers a new perspective for road transport decarbonization research and adds new insights to the obtained correlation between the NIS dynamics and the achievement of sustainability goals. By 2050, the fleet is expected to achieve 100% carbon neutrality in the PC segment and ~85% in the HDV segment. Finally, four broad conclusions emerged from the analysis. 2024-02-08 Technologies, Vol. 12, Pages 26: A National Innovation System Concept-Based Analysis of Autonomous Vehicles’ Potential in Reaching Zero-Emission Fleets

    Technologies doi: 10.3390/technologies12020026

    Authors: Nalina Hamsaiyni Venkatesh Laurencas Raslavičius

    Change management for technology adoption in the transportation sector is often used to address long-term challenges characterized by complexity, uncertainty, and ambiguity. Especially when technology is still evolving, an analysis of these challenges can help explore alternative future pathways. Therefore, the analysis of development trajectories, correlations between key system variables, and the rate of change within the entire road transportation system can guide action toward sustainability. By adopting the National Innovation System concept, we evaluated the potential of the autonomous vehicle option to achieve a zero-emission fleet. A case-specific analysis was conducted to evaluate industry capacities, the performance of R&D organizations, the main objectives of future market-oriented reforms in the power sector, policy implications, and other aspects to gain insightful perspectives. Environmental insights for transportation sector scenarios in 2021, 2030, and 2050 were explored and analyzed using the COPERT v5.5.1 software program. This study offers a new perspective for road transport decarbonization research and adds new insights to the obtained correlation between the NIS dynamics and the achievement of sustainability goals. By 2050, the fleet is expected to achieve 100% carbon neutrality in the PC segment and ~85% in the HDV segment. Finally, four broad conclusions emerged from the analysis.

    A National Innovation System Concept-Based Analysis of Autonomous Vehicles’ Potential in Reaching Zero-Emission Fleets Nalina Hamsaiyni Venkatesh Laurencas Raslavičius doi: 10.3390/technologies12020026 Technologies 2024-02-08 Technologies 2024-02-08 12 2
    Article
    26 10.3390/technologies12020026 https://www.mdpi.com/2227-7080/12/2/26
    Technologies, Vol. 12, Pages 25: Exploiting PlanetScope Imagery for Volcanic Deposits Mapping https://www.mdpi.com/2227-7080/12/2/25 During explosive eruptions, tephra fallout represents one of the main volcanic hazards and can be extremely dangerous for air traffic, infrastructures, and human health. Here, we present a new technique aimed at identifying the area covered by tephra after an explosive event, based on processing PlanetScope imagery. We estimate the mean reflectance values of the visible (RGB) and near infrared (NIR) bands, analyzing pre- and post-eruptive data in specific areas and introducing a new index, which we call the ‘Tephra Fallout Index (TFI)’. We use the Google Earth Engine computing platform and define a threshold for the TFI of different eruptive events to distinguish the areas affected by the tephra fallout and quantify the surface coverage density. We apply our technique to the eruptive events occurring in 2021 at Mt. Etna (Italy), which mainly involved the eastern flank of the volcano, sometimes two or three times within a day, making field surveys difficult. Whenever possible, we compare our results with field data and find an optimal match. This work could have important implications for the identification and quantification of short-term volcanic hazard assessments in near real-time during a volcanic eruption, but also for the mapping of other hazardous events worldwide. 2024-02-08 Technologies, Vol. 12, Pages 25: Exploiting PlanetScope Imagery for Volcanic Deposits Mapping

    Technologies doi: 10.3390/technologies12020025

    Authors: Maddalena Dozzo Gaetana Ganci Federico Lucchi Simona Scollo

    During explosive eruptions, tephra fallout represents one of the main volcanic hazards and can be extremely dangerous for air traffic, infrastructures, and human health. Here, we present a new technique aimed at identifying the area covered by tephra after an explosive event, based on processing PlanetScope imagery. We estimate the mean reflectance values of the visible (RGB) and near infrared (NIR) bands, analyzing pre- and post-eruptive data in specific areas and introducing a new index, which we call the ‘Tephra Fallout Index (TFI)’. We use the Google Earth Engine computing platform and define a threshold for the TFI of different eruptive events to distinguish the areas affected by the tephra fallout and quantify the surface coverage density. We apply our technique to the eruptive events occurring in 2021 at Mt. Etna (Italy), which mainly involved the eastern flank of the volcano, sometimes two or three times within a day, making field surveys difficult. Whenever possible, we compare our results with field data and find an optimal match. This work could have important implications for the identification and quantification of short-term volcanic hazard assessments in near real-time during a volcanic eruption, but also for the mapping of other hazardous events worldwide.

    Exploiting PlanetScope Imagery for Volcanic Deposits Mapping Maddalena Dozzo Gaetana Ganci Federico Lucchi Simona Scollo doi: 10.3390/technologies12020025 Technologies 2024-02-08 Technologies 2024-02-08 12 2
    Communication
    25 10.3390/technologies12020025 https://www.mdpi.com/2227-7080/12/2/25
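    The pre-/post-event mean-reflectance comparison described above maps naturally onto the Earth Engine Python API. In the sketch below, the collection id, band names, dates, area of interest, index expression, and threshold are all placeholders; the paper defines its own TFI and per-event thresholds.

```python
# Earth Engine sketch: compare mean RGB+NIR reflectance before and after an
# eruptive event. COLLECTION_ID and the threshold are hypothetical.

import ee
ee.Initialize()

COLLECTION_ID = "your/planetscope/collection"       # placeholder asset id
BANDS = ["B1", "B2", "B3", "B4"]                    # blue, green, red, NIR
aoi = ee.Geometry.Point([15.0, 37.75]).buffer(5000) # approx. Etna summit area

def mean_reflectance(start, end):
    return (ee.ImageCollection(COLLECTION_ID)
            .filterBounds(aoi).filterDate(start, end)
            .select(BANDS).mean())

pre = mean_reflectance("2021-02-10", "2021-02-15")
post = mean_reflectance("2021-02-16", "2021-02-21")

# Illustrative stand-in for a tephra index: post-minus-pre mean reflectance.
tfi = post.subtract(pre).reduce(ee.Reducer.mean()).rename("TFI")
covered = tfi.lt(-0.05)   # hypothetical threshold: surfaces darkened by tephra
```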
    Technologies, Vol. 12, Pages 24: High Affinity of Nanoparticles and Matrices Based on Acid-Base Interaction for Nanoparticle-Filled Membrane https://www.mdpi.com/2227-7080/12/2/24 The introduction of nanoparticles into the polymer matrix is a useful technique for creating highly functional composite membranes. Our research focuses on the development of nanoparticle-filled proton exchange membranes (PEMs). PEMs play a crucial role in efficiently controlling the electrical energy conversion process by facilitating the movement of specific ions. This is achieved by creating functionalized nanoparticles with polymer coatings on their surfaces, which are then combined with resins to create proton-conducting membranes. In this study, we prepared PEMs by coating the surfaces of silica nanoparticles with acidic polymers and integrating them into a basic matrix. This process resulted in the formation of a direct bond between the nanoparticles and the matrix, leading to composite membranes with a high dispersion and densely packed nanoparticles. This fabrication technique significantly improved mechanical strength and retention stability, resulting in high-performance membranes. Moreover, the proton conductivity of these membranes showed a remarkable enhancement of more than two orders of magnitude compared to the pristine basic matrix, reaching 4.2 × 10−4 S/cm at 80 °C and 95% relative humidity. 2024-02-07 Technologies, Vol. 12, Pages 24: High Affinity of Nanoparticles and Matrices Based on Acid-Base Interaction for Nanoparticle-Filled Membrane

    Technologies doi: 10.3390/technologies12020024

    Authors: Tsutomu Makino Keisuke Tabata Takaaki Saito Yosimasa Matsuo Akito Masuhara

    The introduction of nanoparticles into the polymer matrix is a useful technique for creating highly functional composite membranes. Our research focuses on the development of nanoparticle-filled proton exchange membranes (PEMs). PEMs play a crucial role in efficiently controlling the electrical energy conversion process by facilitating the movement of specific ions. This is achieved by creating functionalized nanoparticles with polymer coatings on their surfaces, which are then combined with resins to create proton-conducting membranes. In this study, we prepared PEMs by coating the surfaces of silica nanoparticles with acidic polymers and integrating them into a basic matrix. This process resulted in the formation of a direct bond between the nanoparticles and the matrix, leading to composite membranes with a high dispersion and densely packed nanoparticles. This fabrication technique significantly improved mechanical strength and retention stability, resulting in high-performance membranes. Moreover, the proton conductivity of these membranes showed a remarkable enhancement of more than two orders of magnitude compared to the pristine basic matrix, reaching 4.2 × 10−4 S/cm at 80 °C and 95% relative humidity.

    High Affinity of Nanoparticles and Matrices Based on Acid-Base Interaction for Nanoparticle-Filled Membrane Tsutomu Makino Keisuke Tabata Takaaki Saito Yosimasa Matsuo Akito Masuhara doi: 10.3390/technologies12020024 Technologies 2024-02-07 Technologies 2024-02-07 12 2
    Communication
    24 10.3390/technologies12020024 https://www.mdpi.com/2227-7080/12/2/24
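    For context, the proton conductivity of a membrane is commonly computed as σ = L/(R·A) from an impedance measurement. A sketch with hypothetical values chosen to land near the reported 4.2 × 10−4 S/cm; the paper's actual cell geometry and resistance are not given here.

```python
# Generic conductivity arithmetic for membrane measurements:
# sigma = L / (R * A). All numbers below are hypothetical.

def conductivity(L_cm, R_ohm, area_cm2):
    """L: electrode spacing [cm], R: membrane resistance from impedance [ohm],
    area: conduction cross-section [cm^2]. Returns sigma in S/cm."""
    return L_cm / (R_ohm * area_cm2)

sigma = conductivity(L_cm=1.0, R_ohm=4.76e4, area_cm2=0.05)
print(f"{sigma:.1e} S/cm")   # -> 4.2e-04
```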
    Technologies, Vol. 12, Pages 23: Multistage Malware Detection Method for Backup Systems https://www.mdpi.com/2227-7080/12/2/23 This paper proposes an innovative solution to address the challenge of detecting latent malware in backup systems. The proposed detection system utilizes a multifaceted approach that combines similarity analysis with machine learning algorithms to improve malware detection. The results demonstrate the potential of advanced similarity search techniques, powered by the Faiss model, in strengthening malware discovery within system backups and network traffic. Implementing these techniques will lead to more resilient cybersecurity practices, protecting essential systems from malicious threats hidden within backup archives and network data. The integration of AI methods improves the system’s efficiency and speed, making the proposed system more practical for real-world cybersecurity. This paper’s contribution is a novel and comprehensive solution designed to detect latent malware in backups, preventing the backup of compromised systems. The system comprises multiple analytical components, including a system file change detector, an agent to monitor network traffic, and a firewall, all integrated into a central decision-making unit. The current progress of the research and future steps are discussed, highlighting the contributions of this project and potential enhancements to improve cybersecurity practices. 2024-02-05 Technologies, Vol. 12, Pages 23: Multistage Malware Detection Method for Backup Systems

    Technologies doi: 10.3390/technologies12020023

    Authors: Pavel Novak Vaclav Oujezsky Patrik Kaura Tomas Horvath Martin Holik

    This paper proposes an innovative solution to address the challenge of detecting latent malware in backup systems. The proposed detection system utilizes a multifaceted approach that combines similarity analysis with machine learning algorithms to improve malware detection. The results demonstrate the potential of advanced similarity search techniques, powered by the Faiss model, in strengthening malware discovery within system backups and network traffic. Implementing these techniques will lead to more resilient cybersecurity practices, protecting essential systems from malicious threats hidden within backup archives and network data. The integration of AI methods improves the system’s efficiency and speed, making the proposed system more practical for real-world cybersecurity. This paper’s contribution is a novel and comprehensive solution designed to detect latent malware in backups, preventing the backup of compromised systems. The system comprises multiple analytical components, including a system file change detector, an agent to monitor network traffic, and a firewall, all integrated into a central decision-making unit. The current progress of the research and future steps are discussed, highlighting the contributions of this project and potential enhancements to improve cybersecurity practices.

    Multistage Malware Detection Method for Backup Systems Pavel Novak Vaclav Oujezsky Patrik Kaura Tomas Horvath Martin Holik doi: 10.3390/technologies12020023 Technologies 2024-02-05 Technologies 2024-02-05 12 2
    Article
    23 10.3390/technologies12020023 https://www.mdpi.com/2227-7080/12/2/23
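    A minimal Faiss similarity-search sketch in the spirit of the detector above; the feature extraction, dimensionality, and distance threshold are assumptions, and random vectors stand in for real file features.

```python
# Exact L2 nearest-neighbor search with Faiss: flag backup files whose
# feature vectors fall close to known-malware references.

import faiss
import numpy as np

d = 128                                    # feature dimensionality (assumed)
rng = np.random.default_rng(0)
known_malware = rng.random((10_000, d)).astype("float32")  # reference vectors
backup_files = rng.random((5, d)).astype("float32")        # query vectors

index = faiss.IndexFlatL2(d)               # exact L2 index
index.add(known_malware)

D, I = index.search(backup_files, 1)       # nearest known-malware neighbor
flagged = D[:, 0] < 15.0                   # hypothetical distance threshold
print(flagged)
```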
    Technologies, Vol. 12, Pages 22: Angle Calculus-Based Thrust Force Determination on the Blades of a 10 kW Wind Turbine https://www.mdpi.com/2227-7080/12/2/22 In this article, the behavior of the thrust force on the blades of a 10 kW wind turbine was obtained by considering the characteristic wind speed of the Isthmus of Tehuantepec. Analyzing mechanical forces is essential for efficiently and safely designing the different elements that make up the wind turbine because the thrust forces are related to the location point and the blade rotation. For this reason, the thrust force generated in each of the three blades of a low-power wind turbine was analyzed. The angular position (θ) of each blade varied from 0° to 120°, the blades were segmented (r), and different wind speeds were tested: cut-in, design, average, and maximum. The results demonstrate that the thrust force increases proportionally with increasing wind speed and height, but it behaves differently on each blade segment and at each angular position. This method determines the angular position and the exact blade segment where the smallest and largest thrust forces occurred. Blade 1, positioned at an angular position of 90°, is the blade most affected by the thrust force on P15. When the blade rotates 180°, the thrust force decreases by 9.09 N, a 66.74% decrease. In addition, this study allows designers to know the blade deflection caused by the thrust force; this information can be used to avoid collision with the tower. The thrust forces caused blade deflections of 10% to 13% relative to the rotor radius used in this study. These results guarantee the operation of the tested generator under its working conditions. 2024-02-05 Technologies, Vol. 12, Pages 22: Angle Calculus-Based Thrust Force Determination on the Blades of a 10 kW Wind Turbine

    Technologies doi: 10.3390/technologies12020022

    Authors: José Rafael Dorrego-Portela Adriana Eneida Ponce-Martínez Eduardo Pérez-Chaltell Jaime Peña-Antonio Carlos Alberto Mateos-Mendoza José Billerman Robles-Ocampo Perla Yazmin Sevilla-Camacho Marcos Aviles Juvenal Rodríguez-Reséndiz

    In this article, the behavior of the thrust force on the blades of a 10 kW wind turbine was obtained by considering the characteristic wind speed of the Isthmus of Tehuantepec. Analyzing mechanical forces is essential for efficiently and safely designing the different elements that make up the wind turbine because the thrust forces are related to the location point and the blade rotation. For this reason, the thrust force generated in each of the three blades of a low-power wind turbine was analyzed. The angular position (θ) of each blade varied from 0° to 120°, the blades were segmented (r), and different wind speeds were tested: cut-in, design, average, and maximum. The results demonstrate that the thrust force increases proportionally with increasing wind speed and height, but it behaves differently on each blade segment and at each angular position. This method determines the angular position and the exact blade segment where the smallest and largest thrust forces occurred. Blade 1, positioned at an angular position of 90°, is the blade most affected by the thrust force on P15. When the blade rotates 180°, the thrust force decreases by 9.09 N, a 66.74% decrease. In addition, this study allows designers to know the blade deflection caused by the thrust force; this information can be used to avoid collision with the tower. The thrust forces caused blade deflections of 10% to 13% relative to the rotor radius used in this study. These results guarantee the operation of the tested generator under its working conditions.

    Angle Calculus-Based Thrust Force Determination on the Blades of a 10 kW Wind Turbine José Rafael Dorrego-Portela Adriana Eneida Ponce-Martínez Eduardo Pérez-Chaltell Jaime Peña-Antonio Carlos Alberto Mateos-Mendoza José Billerman Robles-Ocampo Perla Yazmin Sevilla-Camacho Marcos Aviles Juvenal Rodríguez-Reséndiz doi: 10.3390/technologies12020022 Technologies 2024-02-05 Technologies 2024-02-05 12 2
    Article
    22 10.3390/technologies12020022 https://www.mdpi.com/2227-7080/12/2/22
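    As a rotor-level cross-check on thrust magnitudes, one-dimensional momentum theory gives T = ½ρAv²C_T. This is a generic estimate, not the paper's per-segment, per-angle method; the radius, wind speed, and thrust coefficient below are hypothetical for a ~10 kW machine.

```python
# One-dimensional momentum-theory thrust estimate for a whole rotor.
# C_T = 8/9 corresponds to the Betz-optimum operating point.

import math

def rotor_thrust(v, rotor_radius, Ct=8.0 / 9.0, rho=1.225):
    """T = 0.5 * rho * A * v^2 * Ct, with A the swept area [m^2],
    v the wind speed [m/s], rho the air density [kg/m^3]."""
    A = math.pi * rotor_radius ** 2
    return 0.5 * rho * A * v ** 2 * Ct

print(f"{rotor_thrust(v=11.0, rotor_radius=3.0):.0f} N")  # design-speed estimate
```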
    Technologies, Vol. 12, Pages 21: Energy Efficiency in Additive Manufacturing: Condensed Review https://www.mdpi.com/2227-7080/12/2/21 Today, the use of additive manufacturing (AM) is growing in almost every aspect of daily life. A high number of sectors are adopting and implementing this revolutionary production technology in their domain to increase production volumes, reduce the cost of production, fabricate lightweight and complex parts in a short period of time, and respond to the manufacturing needs of customers. It is clear that AM technologies consume energy to complete the production tasks of each part. Therefore, it is imperative to know the impact of energy efficiency in order to use these advancing technologies economically and properly. This paper provides a holistic review of this important concept from the perspectives of process, materials science, industry, and initiatives. The goal of this research study is to collect and present the latest knowledge blocks related to the energy consumption of AM technologies from a number of recent technical resources. Overall, they are a collection of surveys, observations, experimentations, case studies, content analyses, and archival research studies. The study highlights the current trends and technologies associated with energy efficiency and their influence on the AM community. 2024-02-05 Technologies, Vol. 12, Pages 21: Energy Efficiency in Additive Manufacturing: Condensed Review

    Technologies doi: 10.3390/technologies12020021

    Authors: Ismail Fidan Vivekanand Naikwadi Suhas Alkunte Roshan Mishra Khalid Tantawi

    Today, the use of additive manufacturing (AM) is growing in almost every aspect of daily life. A high number of sectors are adopting and implementing this revolutionary production technology in their domain to increase production volumes, reduce the cost of production, fabricate lightweight and complex parts in a short period of time, and respond to the manufacturing needs of customers. It is clear that AM technologies consume energy to complete the production tasks of each part. Therefore, it is imperative to know the impact of energy efficiency in order to use these advancing technologies economically and properly. This paper provides a holistic review of this important concept from the perspectives of process, materials science, industry, and initiatives. The goal of this research study is to collect and present the latest knowledge blocks related to the energy consumption of AM technologies from a number of recent technical resources. Overall, they are a collection of surveys, observations, experimentations, case studies, content analyses, and archival research studies. The study highlights the current trends and technologies associated with energy efficiency and their influence on the AM community.

    Energy Efficiency in Additive Manufacturing: Condensed Review Ismail Fidan Vivekanand Naikwadi Suhas Alkunte Roshan Mishra Khalid Tantawi doi: 10.3390/technologies12020021 Technologies 2024-02-05 Technologies 2024-02-05 12 2
    Review
    21 10.3390/technologies12020021 https://www.mdpi.com/2227-7080/12/2/21
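    A recurring quantity in the energy-efficiency literature reviewed above is the specific energy consumption of a build. A sketch of the basic arithmetic with hypothetical machine values:

```python
# Specific energy consumption (SEC): electrical energy per unit mass of
# printed parts. Values below are hypothetical, for illustration only.

def sec_kwh_per_kg(power_kw, build_time_h, part_mass_kg):
    """SEC = total electrical energy drawn during the build / part mass."""
    return power_kw * build_time_h / part_mass_kg

print(f"{sec_kwh_per_kg(power_kw=1.2, build_time_h=8.0, part_mass_kg=0.35):.1f} kWh/kg")
```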
    Technologies, Vol. 12, Pages 20: Parametric Metamodeling Based on Optimal Transport Applied to Uncertainty Evaluation https://www.mdpi.com/2227-7080/12/2/20 When training a parametric surrogate to represent a real-world complex system in real time, there is a common assumption that the values of the parameters defining the system are known with absolute confidence. Consequently, during the training process, our focus is directed exclusively towards optimizing the accuracy of the surrogate’s output. However, real physics is characterized by increased complexity and unpredictability. Notably, a certain degree of uncertainty may exist in determining the system’s parameters. Therefore, in this paper, we account for the propagation of these uncertainties through the surrogate using a standard Monte Carlo methodology. Subsequently, we propose a novel regression technique based on optimal transport to infer the impact of the uncertainty of the surrogate’s input on its output precision in real time. The OT-based regression allows for the inference of fields emulating physical reality more accurately than classical regression techniques, including advanced ones. 2024-02-02 Technologies, Vol. 12, Pages 20: Parametric Metamodeling Based on Optimal Transport Applied to Uncertainty Evaluation

    Technologies doi: 10.3390/technologies12020020

    Authors: Sergio Torregrosa David Muñoz Vincent Herbert Francisco Chinesta

    When training a parametric surrogate to represent a real-world complex system in real time, there is a common assumption that the values of the parameters defining the system are known with absolute confidence. Consequently, during the training process, our focus is directed exclusively towards optimizing the accuracy of the surrogate’s output. However, real physics is characterized by increased complexity and unpredictability. Notably, a certain degree of uncertainty may exist in determining the system’s parameters. Therefore, in this paper, we account for the propagation of these uncertainties through the surrogate using a standard Monte Carlo methodology. Subsequently, we propose a novel regression technique based on optimal transport to infer the impact of the uncertainty of the surrogate’s input on its output precision in real time. The OT-based regression allows for the inference of fields emulating physical reality more accurately than classical regression techniques, including advanced ones.

    Parametric Metamodeling Based on Optimal Transport Applied to Uncertainty Evaluation Sergio Torregrosa David Muñoz Vincent Herbert Francisco Chinesta doi: 10.3390/technologies12020020 Technologies 2024-02-02 Technologies 2024-02-02 12 2
    Article
    20 10.3390/technologies12020020 https://www.mdpi.com/2227-7080/12/2/20
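    The Monte Carlo propagation step described above can be outlined as follows; the surrogate here is a stand-in analytic function and the input distributions are assumptions (the OT-based regression itself is not reproduced).

```python
# Monte Carlo propagation of input-parameter uncertainty through a trained
# surrogate: draw uncertain inputs, evaluate the surrogate, summarize output.

import numpy as np

def surrogate(p):                      # stand-in for the trained metamodel
    return np.sin(p[:, 0]) + 0.5 * p[:, 1] ** 2

rng = np.random.default_rng(0)
n = 10_000
mean = np.array([1.0, 2.0])            # nominal parameter values (assumed)
std = np.array([0.05, 0.10])           # parameter uncertainty (assumed)
samples = rng.normal(mean, std, size=(n, 2))   # MC input draws

out = surrogate(samples)
print(f"output mean {out.mean():.3f}, std {out.std():.3f}")  # propagated uncertainty
```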
    Technologies, Vol. 12, Pages 19: An Optimum Load Forecasting Strategy (OLFS) for Smart Grids Based on Artificial Intelligence https://www.mdpi.com/2227-7080/12/2/19 Recently, the application of Artificial Intelligence (AI) in many areas of life has made it possible to raise the efficiency of systems and convert them into smart ones, especially in the field of energy. Integrating AI with power systems allows electrical grids to be smart enough to predict future load, which is known as Intelligent Load Forecasting (ILF). Hence, suitable decisions for power system planning and operation procedures can be taken accordingly. Moreover, ILF can play a vital role in electrical demand response, which guarantees a reliable transition of power systems. This paper introduces an Optimum Load Forecasting Strategy (OLFS) for predicting future load in smart electrical grids based on AI techniques. The proposed OLFS consists of two sequential phases: a Data Preprocessing Phase (DPP) and a Load Forecasting Phase (LFP). In the former phase, an input electrical load dataset is prepared before the actual forecasting takes place through two essential tasks, namely feature selection and outlier rejection. Feature selection is carried out using Advanced Leopard Seal Optimization (ALSO) as a new nature-inspired optimization technique, while outlier rejection is accomplished through the Interquartile Range (IQR) as a measure of statistical dispersion. On the other hand, actual load forecasting takes place in the LFP using a new predictor called the Weighted K-Nearest Neighbor (WKNN) algorithm. The proposed OLFS has been tested through extensive experiments. Results have shown that OLFS outperforms recent load forecasting techniques as it achieves the maximum prediction accuracy with the minimum root mean square error. 2024-02-01 Technologies, Vol. 12, Pages 19: An Optimum Load Forecasting Strategy (OLFS) for Smart Grids Based on Artificial Intelligence

    Technologies doi: 10.3390/technologies12020019

    Authors: Asmaa Hamdy Rabie Ahmed I. Saleh Said H. Abd Elkhalik Ali E. Takieldeen

    Recently, the application of Artificial Intelligence (AI) in many areas of life has raised the efficiency of systems and converted them into smart ones, especially in the field of energy. Integrating AI with power systems allows electrical grids to become smart enough to predict future load, which is known as Intelligent Load Forecasting (ILF). Hence, suitable decisions for power system planning and operation procedures can be taken accordingly. Moreover, ILF can play a vital role in electrical demand response, which guarantees a reliable transition of power systems. This paper introduces an Optimum Load Forecasting Strategy (OLFS) for predicting future load in smart electrical grids based on AI techniques. The proposed OLFS consists of two sequential phases: a Data Preprocessing Phase (DPP) and a Load Forecasting Phase (LFP). In the former phase, an input electrical load dataset is prepared before the actual forecasting takes place through two essential tasks, namely feature selection and outlier rejection. Feature selection is carried out using Advanced Leopard Seal Optimization (ALSO), a new nature-inspired optimization technique, while outlier rejection is accomplished through the Interquartile Range (IQR) as a measure of statistical dispersion. Actual load forecasting then takes place in the LFP using a new predictor called the Weighted K-Nearest Neighbor (WKNN) algorithm. The proposed OLFS has been tested through extensive experiments. Results have shown that OLFS outperforms recent load forecasting techniques, achieving the highest prediction accuracy with the minimum root mean square error.
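
    A minimal sketch of the two ingredients named above, IQR-based outlier rejection and weighted KNN prediction, is given below; the synthetic load data and the choice of inverse-distance weights are illustrative assumptions, since the paper's exact WKNN weighting is not reproduced here:

        import numpy as np

        def iqr_filter(y, k=1.5):
            # Reject outliers outside [Q1 - k*IQR, Q3 + k*IQR].
            q1, q3 = np.percentile(y, [25, 75])
            iqr = q3 - q1
            return (y >= q1 - k * iqr) & (y <= q3 + k * iqr)

        def wknn_predict(X_train, y_train, x, k=5, eps=1e-9):
            # Weighted KNN: neighbors vote with inverse-distance weights.
            d = np.linalg.norm(X_train - x, axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / (d[idx] + eps)
            return np.sum(w * y_train[idx]) / np.sum(w)

        # Toy usage with synthetic load data (features: hour, temperature).
        rng = np.random.default_rng(1)
        X = rng.uniform([0, 10], [23, 35], size=(200, 2))
        y = 50 + 2 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 3, 200)
        mask = iqr_filter(y)
        print(wknn_predict(X[mask], y[mask], np.array([12.0, 25.0])))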

    Article
    Technologies, Vol. 12, Pages 18: A Comprehensive Performance Analysis of a 48-Watt Transformerless DC-DC Boost Converter Using a Proportional–Integral–Derivative Controller with Special Attention to Inductor Design and Components Reliability https://www.mdpi.com/2227-7080/12/2/18 (2024-01-30)

    Technologies doi: 10.3390/technologies12020018

    Authors: Kuldeep Jayaswal D. K. Palwalia Josep M. Guerrero

    In this research paper, a comprehensive performance analysis was carried out for a 48-watt transformerless DC-DC boost converter using a Proportional–Integral–Derivative (PID) controller through dynamic modeling. In a boost converter, the optimal design of the magnetic element plays an important role in efficient energy transfer. This paper emphasizes the design of an inductor using the Area Product Technique (APT) to analyze factors such as area product, window area, number of turns, and wire size. Observations were made by examining the converter’s response to changes in load current, supply voltage, and load resistance at switching frequencies of 100 and 500 kHz. Moreover, the paper extends its investigation by analyzing the failure rates and reliability of the active and passive components in the 48-watt boost converter, providing valuable insights into their failure behavior. Frequency-domain analysis was conducted to assess the controller’s stability and robustness. The results conclusively underscore the benefits of incorporating the designed PID controller in terms of achieving the desired regulation and rapid response to disturbances at 100 and 500 kHz. The findings emphasize the outstanding reliability of the inductor, evident from its significantly low failure rate in comparison to the other circuit components. Conversely, the research also reveals the inherent vulnerability of the switching device (MOSFET), characterized by a higher failure rate and lower reliability. The MATLAB® Simulink platform was used to obtain the results.
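
    For orientation, the steady-state design relations of an ideal continuous-conduction boost converter can be computed directly; the input/output voltages and ripple target below are illustrative assumptions, not the paper's specification:

        # Ideal CCM boost converter design sketch (illustrative values).
        V_in, V_out, P_out = 12.0, 24.0, 48.0   # volts, volts, watts (assumed)
        f_sw = 100e3                            # switching frequency, 100 kHz case
        D = 1 - V_in / V_out                    # duty cycle from Vout = Vin / (1 - D)
        I_L = P_out / V_in                      # average inductor current (lossless)
        ripple = 0.3 * I_L                      # allow 30% peak-to-peak current ripple
        L = V_in * D / (f_sw * ripple)          # inductance from dI = Vin * D / (L * f)
        print(f"D = {D:.2f}, I_L = {I_L:.2f} A, L = {L * 1e6:.1f} uH")

    With these assumed values the sketch gives D = 0.50 and L = 50 µH; at 500 kHz the same ripple target would need only one fifth of the inductance, which is why the switching frequency strongly shapes the area-product sizing.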

    Article
    Technologies, Vol. 12, Pages 17: Comprehensive Study of Compression and Texture Integration for Digital Imaging and Communications in Medicine Data Analysis https://www.mdpi.com/2227-7080/12/2/17 (2024-01-24)

    Technologies doi: 10.3390/technologies12020017

    Authors: Amit Kumar Shakya Anurag Vidyarthi

    In response to the COVID-19 pandemic and its strain on healthcare resources, this study presents a comprehensive review of techniques for integrating image compression and statistical texture analysis to optimize the storage of Digital Imaging and Communications in Medicine (DICOM) files. In evaluating four predominant image compression algorithms, i.e., the discrete cosine transform (DCT), the discrete wavelet transform (DWT), the fractal compression algorithm (FCA), and the vector quantization algorithm (VQA), this study focuses on their ability to compress data while preserving essential texture features such as contrast, correlation, angular second moment (ASM), and inverse difference moment (IDM). A pivotal observation concerns the direction-independent Grey Level Co-occurrence Matrix (GLCM) in DICOM analysis, which reveals notable variations in texture characteristics between two intermediate scans. Performance-wise, the DCT, DWT, FCA, and VQA algorithms achieved minimum compression ratios (CRs) of 27.87, 37.91, 33.26, and 27.39, respectively, with maximum CRs of 34.48, 68.96, 60.60, and 38.74. This study also undertook a statistical analysis of distinct CT chest scans from COVID-19 patients, highlighting evolving texture patterns. Finally, this work underscores the potential of coupling image compression and texture feature quantification for monitoring changes related to human chest conditions, offering a promising avenue for the efficient storage and diagnostic assessment of critical medical imaging.
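
    A compact numpy sketch of the two quantities discussed above, the compression ratio and GLCM-derived texture features (contrast, ASM, IDM), follows; it uses a single symmetric horizontal offset, whereas a fully direction-independent GLCM would average several directions, and the byte counts are arbitrary:

        import numpy as np

        def glcm_features(img, levels=8):
            # Quantize, build a symmetric co-occurrence matrix for a 1-pixel
            # horizontal offset, then derive contrast, ASM, and IDM.
            q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
            glcm = np.zeros((levels, levels))
            for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
                glcm[a, b] += 1
                glcm[b, a] += 1          # symmetric pairing
            p = glcm / glcm.sum()
            i, j = np.indices(p.shape)
            contrast = np.sum(p * (i - j) ** 2)
            asm = np.sum(p ** 2)                     # angular second moment
            idm = np.sum(p / (1 + (i - j) ** 2))     # inverse difference moment
            return contrast, asm, idm

        def compression_ratio(original_bytes, compressed_bytes):
            return original_bytes / compressed_bytes

        img = np.random.default_rng(2).integers(0, 255, (64, 64))
        print(glcm_features(img), compression_ratio(512 * 512 * 2, 9400))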

    Review
    Technologies, Vol. 12, Pages 16: Attention-Based Ensemble Network for Effective Breast Cancer Classification over Benchmarks https://www.mdpi.com/2227-7080/12/2/16 (2024-01-23)

    Technologies doi: 10.3390/technologies12020016

    Authors: Su Myat Thwin Sharaf J. Malebary Anas W. Abulfaraj Hyun-Seok Park

    Globally, breast cancer (BC) is considered a major cause of death among women. Therefore, researchers have used various machine and deep learning-based methods for its early and accurate detection using X-ray, MRI, and mammography image modalities. However, machine learning models require domain experts to select optimal features, achieve limited accuracy, and suffer high false positive rates due to handcrafted feature extraction. Deep learning models overcome these limitations, but they require large amounts of training data and computational resources, and further improvement in model performance is still needed. To address this, we employ a novel framework called the Ensemble-based Channel and Spatial Attention Network (ECS-A-Net) to automatically classify affected regions within BC images. The proposed framework consists of two phases: in the first phase, we apply different augmentation techniques to enlarge the input data, while the second phase includes an ensemble technique that leverages modified SE-ResNet50 and InceptionV3 backbones in parallel for feature extraction, followed by Channel Attention (CA) and Spatial Attention (SA) modules in series for more dominant feature selection. To further validate ECS-A-Net, we conducted extensive experiments against several competitive state-of-the-art (SOTA) techniques on two benchmarks, DDSM and MIAS, where the proposed model achieved 96.50% accuracy on DDSM and 95.33% accuracy on MIAS. Additionally, the experimental results demonstrated that our network outperformed the other methods on various evaluation indicators, including accuracy, sensitivity, and specificity.
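
    A minimal PyTorch sketch of the channel-attention-then-spatial-attention pattern described above (in the spirit of SE/CBAM-style modules) is shown below; the layer sizes are illustrative and the exact ECS-A-Net configuration is not reproduced:

        import torch
        import torch.nn as nn

        class ChannelAttention(nn.Module):
            def __init__(self, channels, reduction=8):
                super().__init__()
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction), nn.ReLU(),
                    nn.Linear(channels // reduction, channels), nn.Sigmoid())

            def forward(self, x):
                # Squeeze spatial dims, excite per-channel weights.
                w = self.fc(x.mean(dim=(2, 3)))
                return x * w[:, :, None, None]

        class SpatialAttention(nn.Module):
            def __init__(self):
                super().__init__()
                self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

            def forward(self, x):
                # Pool across channels, then learn a per-pixel gate.
                s = torch.cat([x.mean(dim=1, keepdim=True),
                               x.max(dim=1, keepdim=True).values], dim=1)
                return x * torch.sigmoid(self.conv(s))

        feats = torch.randn(2, 64, 32, 32)     # e.g., backbone feature maps
        out = SpatialAttention()(ChannelAttention(64)(feats))   # CA then SA, in series
        print(out.shape)                        # torch.Size([2, 64, 32, 32])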

    Article
    Technologies, Vol. 12, Pages 15: A Review of Machine Learning and Deep Learning for Object Detection, Semantic Segmentation, and Human Action Recognition in Machine and Robotic Vision https://www.mdpi.com/2227-7080/12/2/15 (2024-01-23)

    Technologies doi: 10.3390/technologies12020015

    Authors: Nikoleta Manakitsa George S. Maraslidis Lazaros Moysis George F. Fragulis

    Machine vision, an interdisciplinary field that aims to replicate human visual perception in computers, has experienced rapid progress and significant contributions. This paper traces the origins of machine vision, from early image processing algorithms to its convergence with computer science, mathematics, and robotics, resulting in a distinct branch of artificial intelligence. The integration of machine learning techniques, particularly deep learning, has driven its growth and adoption in everyday devices. This study focuses on the objectives of computer vision systems: replicating human visual capabilities including recognition, comprehension, and interpretation. Notably, image classification, object detection, and image segmentation are crucial tasks requiring robust mathematical foundations. Despite the advancements, challenges persist, such as clarifying terminology related to artificial intelligence, machine learning, and deep learning. Precise definitions and interpretations are vital for establishing a solid research foundation. The evolution of machine vision reflects an ambitious journey to emulate human visual perception. Interdisciplinary collaboration and the integration of deep learning techniques have propelled remarkable advancements in emulating human behavior and perception. Through this research, the field of machine vision continues to shape the future of computer systems and artificial intelligence applications.

    Review
    Technologies, Vol. 12, Pages 14: Comparison of Shallow (−20 °C) and Deep Cryogenic Treatment (−196 °C) to Enhance the Properties of a Mg/2wt.%CeO2 Nanocomposite https://www.mdpi.com/2227-7080/12/2/14 (2024-01-23)

    Technologies doi: 10.3390/technologies12020014

    Authors: Shwetabh Gupta Gururaj Parande Manoj Gupta

    Magnesium and its composites have been used in various applications owing to their high specific strength and low density. However, their application is limited to room-temperature conditions owing to the lack of research on the ability of magnesium alloys to perform in sub-zero conditions. The present study investigated, for the first time, the effects of two cryogenic temperatures (−20 °C/253 K and −196 °C/77 K) on the physical, thermal, and mechanical properties of a Mg/2wt.%CeO2 nanocomposite. The materials were synthesized using the disintegrated melt deposition method followed by hot extrusion. The results revealed that the shallow cryogenically treated samples (refrigerated at −20 °C) display a reduction in porosity, lower ignition resistance, and similar microhardness, compressive yield strength, ultimate strength, and failure strain when compared to samples deep cryogenically treated in liquid nitrogen at −196 °C. Although the deep cryogenically treated samples showed an overall edge, the extent of the property improvement may not be justified, as samples exposed to −20 °C display very similar mechanical properties at a much lower overall cost of the cryogenic process. The results were compared with the data available in the open literature, and the mechanisms behind the improvement of the properties were evaluated.

    Communication
    Technologies, Vol. 12, Pages 13: Machine Learning Approaches to Predict Major Adverse Cardiovascular Events in Atrial Fibrillation https://www.mdpi.com/2227-7080/12/2/13 (2024-01-23)

    Technologies doi: 10.3390/technologies12020013

    Authors: Pedro Moltó-Balado Silvia Reverté-Villarroya Victor Alonso-Barberán Cinta Monclús-Arasa Maria Teresa Balado-Albiol Josep Clua-Queralt Josep-Lluis Clua-Espuny

    The increasing prevalence of atrial fibrillation (AF) and its association with Major Adverse Cardiovascular Events (MACE) presents challenges in early identification and treatment. Although existing risk factors, biomarkers, genetic variants, and imaging parameters predict MACE, emerging factors may be more decisive. Artificial intelligence and machine learning (ML) techniques offer a promising avenue for more effective prediction of AF evolution. Five ML models were developed to obtain predictors of MACE in AF patients. Two-thirds of the data were used for training, employing diverse approaches and optimizing to minimize prediction errors, while the remaining third was reserved for testing and validation. AdaBoost emerged as the top-performing model (accuracy: 0.9999; recall: 1; F1 score: 0.9997). Noteworthy features influencing predictions included the Charlson Comorbidity Index (CCI), diabetes mellitus, cancer, the Wells scale, and CHA2DS2-VASc, with specific associations identified. Elevated MACE risk was observed with a CCI score exceeding 2.67 ± 1.31 (p < 0.001), a CHA2DS2-VASc score of 4.62 ± 1.02 (p < 0.001), and an intermediate-risk Wells scale classification. Overall, the AdaBoost model offers an alternative predictive approach to facilitate the early identification of MACE risk in the assessment of patients with AF.
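
    As a schematic of the evaluation protocol described above (two-thirds training, one-third testing, AdaBoost classifier), the following scikit-learn sketch uses synthetic data as a stand-in for the clinical AF cohort:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.metrics import accuracy_score, f1_score, recall_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the AF cohort: the real study used clinical
        # features such as CCI, CHA2DS2-VASc, and the Wells scale.
        X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 / 3,
                                                  random_state=0)

        clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print(f"acc={accuracy_score(y_te, pred):.3f} "
              f"recall={recall_score(y_te, pred):.3f} "
              f"f1={f1_score(y_te, pred):.3f}")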

    Article
    Technologies, Vol. 12, Pages 12: Multi-Arm Trajectory Planning for Optimal Collision-Free Pick-and-Place Operations https://www.mdpi.com/2227-7080/12/1/12 (2024-01-22)

    Technologies doi: 10.3390/technologies12010012

    Authors: Daniel Mateu-Gomez Francisco José Martínez-Peral Carlos Perez-Vidal

    This article addresses the problem of automating a multi-arm pick-and-place robotic system. The objective is to optimize the execution time of a task performed simultaneously by multiple robots sharing the same workspace, determining the order of the operations to be performed. Because of its ability to address decision-making problems of all kinds, the Markov Decision Process (MDP) is adopted as the mathematical framework for modeling the system. In this particular work, the model is adjusted to a deterministic, single-agent, and fully observable system, which allows for its comparison with other resolution methods such as graph search algorithms and the Planning Domain Definition Language (PDDL). The proposed approach provides three advantages: it plans the trajectory to perform the task in minimum time; it avoids collisions between robots; and it automatically generates the robot code for any robot manufacturer and any initial object positions in the workspace. The result meets these objectives and is a fast and robust system that can be safely employed in a production line.
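
    In its simplest form, minimizing the execution time of a task shared between two arms is a minimum-makespan assignment problem; the brute-force sketch below illustrates that core idea with hypothetical action durations, while the paper's MDP formulation additionally handles operation ordering and collision avoidance:

        from itertools import product

        objects = ["A", "B", "C", "D"]
        # Hypothetical pick-and-place durations per (arm, object) pair, in seconds.
        duration = {("arm1", "A"): 4, ("arm1", "B"): 6, ("arm1", "C"): 5, ("arm1", "D"): 3,
                    ("arm2", "A"): 5, ("arm2", "B"): 4, ("arm2", "C"): 6, ("arm2", "D"): 4}

        best = None
        for assign in product(("arm1", "arm2"), repeat=len(objects)):
            # Each arm executes its assigned objects sequentially; the task
            # finishes when the busier arm finishes (the makespan).
            load = {"arm1": 0, "arm2": 0}
            for arm, obj in zip(assign, objects):
                load[arm] += duration[(arm, obj)]
            makespan = max(load.values())
            if best is None or makespan < best[0]:
                best = (makespan, dict(zip(objects, assign)))

        print(best)   # e.g., minimum makespan and the arm assigned to each object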

    Article
    Technologies, Vol. 12, Pages 11: Drone Forensics: An Innovative Approach to the Forensic Investigation of Drone Accidents Based on Digital Twin Technology https://www.mdpi.com/2227-7080/12/1/11 (2024-01-19)

    Technologies doi: 10.3390/technologies12010011

    Authors: Asma Almusayli Tanveer Zia Emad-ul-Haq Qazi

    In recent years, drones have become increasingly common in criminal investigations, either as a means of committing crimes or as effective tools for gathering evidence and conducting surveillance. However, the increasing use of drones has also brought new difficulties to the field of digital forensic investigation. This paper aims to contribute to the growing body of research on digital forensic investigations of drone accidents by proposing an innovative approach based on digital twin technology. The simulation is implemented as part of the digital twin solution using the Robot Operating System (ROS version 2) and simulated environments such as Gazebo and RViz, demonstrating the potential of this technology to improve investigation accuracy and efficiency. This work can contribute to the development of new and innovative investigation techniques.
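
    To give a flavor of the ROS 2 tooling mentioned above, here is a minimal rclpy node of the kind a digital-twin pipeline might use to replay logged drone telemetry into a simulator; the topic name, message type, and log contents are illustrative assumptions, not the paper's implementation:

        import rclpy
        from rclpy.node import Node
        from std_msgs.msg import String

        class TelemetryReplayNode(Node):
            """Replays logged drone telemetry into the digital twin (illustrative)."""

            def __init__(self):
                super().__init__("telemetry_replay")
                self.pub = self.create_publisher(String, "drone/telemetry", 10)
                # Stand-in for a recovered flight log.
                self.log = iter(["alt=12.0,pitch=1.2", "alt=11.4,pitch=8.9"])
                self.create_timer(0.5, self.tick)   # publish at 2 Hz

            def tick(self):
                msg = String()
                msg.data = next(self.log, "end-of-log")
                self.pub.publish(msg)

        def main():
            rclpy.init()
            rclpy.spin(TelemetryReplayNode())

        if __name__ == "__main__":
            main()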

    Article
    Technologies, Vol. 12, Pages 10: A Miniaturized Antenna for Millimeter-Wave 5G-II Band Communication https://www.mdpi.com/2227-7080/12/1/10 (2024-01-18)

    Technologies doi: 10.3390/technologies12010010

    Authors: Manish Varun Yadav Chandru Kumar R Swati Varun Yadav Tanweer Ali Jaume Anguera

    This article introduces a miniaturized antenna for 5G-II band millimeter-wave communication. The antenna’s performance is examined through comprehensive simulations carried out in CST Microwave Studio, employing an FR-4 substrate with dimensions of 12 × 14 × 1.6 mm³. The proposed design exhibits an impedance bandwidth of 70.4% and a return loss of −35 dB. The operational frequency range extends from 16.2 GHz to 33.8 GHz, with a central frequency of 25 GHz, positioning the antenna effectively within the 5G-II band. The antenna maintains consistent polar patterns throughout this spectrum, which guarantees dependable and efficient performance. It delivers a gain of 3.85 dBi and an efficiency of 82.9%. This versatile antenna is well suited for a diverse range of applications, including the Ka band, the Ku band, the 5G-II bands, and various other microwave uses.
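
    The quoted 70.4% impedance bandwidth follows directly from the stated band edges, as this quick check confirms:

        f_low, f_high = 16.2e9, 33.8e9            # band edges from the abstract, Hz
        f_center = (f_low + f_high) / 2           # 25 GHz, as stated
        fbw = 100 * (f_high - f_low) / f_center   # fractional bandwidth, percent
        print(f"center = {f_center / 1e9:.1f} GHz, fractional BW = {fbw:.1f}%")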

    Article
    Technologies, Vol. 12, Pages 9: A Mixed Reality Design System for Interior Renovation: Inpainting with 360-Degree Live Streaming and Generative Adversarial Networks after Removal https://www.mdpi.com/2227-7080/12/1/9 (2024-01-11)

    Technologies doi: 10.3390/technologies12010009

    Authors: Yuehan Zhu Tomohiro Fukuda Nobuyoshi Yabuki

    In contemporary society, “Indoor Generation” is becoming increasingly prevalent, and spending long periods of time indoors affects well-being. Therefore, it is essential to research biophilic indoor environments and their impact on occupants. When it comes to existing building stocks, which hold significant social, economic, and environmental value, renovation should be considered before new construction. Providing swift feedback in the early stages of renovation can help stakeholders achieve consensus. Additionally, understanding proposed plans can greatly enhance the design of indoor environments. This paper presents a real-time system for architectural designers and stakeholders that integrates mixed reality (MR), diminished reality (DR), and generative adversarial networks (GANs). The system enables the generation of interior renovation drawings based on user preferences and designer styles via GANs. The system’s seamless integration of MR, DR, and GANs provides a unique and innovative approach to interior renovation design. MR and DR technologies then transform these 2D drawings into immersive experiences that help stakeholders evaluate and understand renovation proposals. In addition, we assess the quality of GAN-generated images using full-reference image quality assessment (FR-IQA) methods. The evaluation results indicate that most images demonstrate moderate quality. Almost all objects in the GAN-generated images can be identified by their names and purposes without any ambiguity or confusion. This demonstrates the system’s effectiveness in producing viable renovation visualizations. This research emphasizes the system’s role in enhancing feedback efficiency during renovation design, enabling stakeholders to fully evaluate and understand proposed renovations.
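
    Among full-reference IQA measures, PSNR is the simplest representative; the numpy sketch below illustrates the idea on synthetic images (the paper's exact FR-IQA metric set is not specified here):

        import numpy as np

        def psnr(reference, test, max_val=255.0):
            # Full-reference IQA: compare a test image against its reference.
            mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
            if mse == 0:
                return float("inf")
            return 10 * np.log10(max_val ** 2 / mse)

        rng = np.random.default_rng(3)
        ref = rng.integers(0, 256, (256, 256), dtype=np.uint8)
        noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
        print(f"PSNR = {psnr(ref, noisy):.2f} dB")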

    Article
    Technologies, Vol. 12, Pages 8: Editorial for the Special Issue “Data Science and Big Data in Biology, Physical Science and Engineering” https://www.mdpi.com/2227-7080/12/1/8 (2024-01-08)

    Technologies doi: 10.3390/technologies12010008

    Authors: Mohammed Mahmoud

    Big Data analysis is one of the most contemporary areas of development and research in the present day [...]

    Editorial
    Technologies, Vol. 12, Pages 7: Towards a Bidirectional Mexican Sign Language–Spanish Translation System: A Deep Learning Approach https://www.mdpi.com/2227-7080/12/1/7 (2024-01-05)

    Technologies doi: 10.3390/technologies12010007

    Authors: Jaime-Rodrigo González-Rodríguez Diana-Margarita Córdova-Esparza Juan Terven Julio-Alejandro Romero-González

    People with hearing disabilities often face communication barriers when interacting with hearing individuals. To address this issue, this paper proposes a bidirectional Sign Language Translation System that aims to bridge the communication gap. Deep learning models such as recurrent neural networks (RNN), bidirectional RNN (BRNN), LSTM, GRU, and Transformers are compared to find the most accurate model for sign language recognition and translation. Keypoint detection using MediaPipe is employed to track and understand sign language gestures. The system features a user-friendly graphical interface with modes for translating between Mexican Sign Language (MSL) and Spanish in both directions. Users can input signs or text and obtain corresponding translations. Performance evaluation demonstrates high accuracy, with the BRNN model achieving 98.8% accuracy. The research emphasizes the importance of hand features in sign language recognition. Future developments could focus on enhancing accessibility and expanding the system to support other sign languages. This Sign Language Translation System offers a promising solution to improve communication accessibility and foster inclusivity for individuals with hearing disabilities.
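
    A minimal PyTorch sketch of the BRNN idea, a bidirectional recurrent network over per-frame keypoint vectors such as those produced by MediaPipe, is given below; the keypoint count, hidden size, and number of sign classes are illustrative assumptions:

        import torch
        import torch.nn as nn

        class SignBRNN(nn.Module):
            # Bidirectional GRU over per-frame keypoint features; sizes below
            # are illustrative, not the paper's exact configuration.
            def __init__(self, n_keypoints=42, n_coords=3, hidden=128, n_signs=50):
                super().__init__()
                self.rnn = nn.GRU(n_keypoints * n_coords, hidden,
                                  batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, n_signs)

            def forward(self, x):            # x: (batch, frames, features)
                out, _ = self.rnn(x)
                return self.head(out[:, -1])  # classify from the last timestep

        model = SignBRNN()
        clip = torch.randn(4, 30, 42 * 3)     # 4 clips, 30 frames, 42 3D keypoints
        print(model(clip).shape)              # torch.Size([4, 50])

    The 42-keypoint assumption corresponds to two hands with 21 landmarks each, the layout MediaPipe's hand tracker produces.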

    Article
    Technologies, Vol. 12, Pages 6: Extended-Window Algorithms for Model Prediction Applied to Hybrid Power Systems https://www.mdpi.com/2227-7080/12/1/6 (2024-01-05)

    Technologies doi: 10.3390/technologies12010006

    Authors: Fu-Cheng Wang Hsiao-Tzu Huang

    This paper proposes extended-window algorithms for model prediction and applies them to optimize hybrid power systems. We consider a hybrid power system comprising solar panels, batteries, a fuel cell, and a chemical hydrogen generation system. The proposed algorithms enable the periodic updating of the prediction models, and of the corresponding system components and power management, based on the accumulated data. First, we develop a hybrid power model to evaluate system responses under different conditions. Second, we build prediction models using five artificial intelligence algorithms. Among them, the light gradient boosting machine and extreme gradient boosting methods achieve the highest accuracies for predicting solar radiation and load responses, respectively; we therefore apply these two models to forecast solar and load responses. Third, we introduce the extended-window algorithms and investigate the effects of window sizes and replacement costs on system performance. The results show that the optimal window size is one week, and the resulting system cost is 13.57% lower than that of a system without the extended-window algorithms. The proposed method also tends to make fewer component replacements as the replacement cost increases. Finally, we design experiments to demonstrate the feasibility and effectiveness of systems using extended-window model prediction.
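
    The extended-window idea, refitting the prediction model on all data accumulated so far at each update, can be sketched as follows; the linear placeholder model and weekly update granularity are illustrative assumptions:

        import numpy as np

        def extended_window_forecast(series, window=7, min_history=14):
            # At each step, refit a simple model on *all* data accumulated so
            # far (the extended window) and predict the next block.
            preds = []
            for t in range(min_history, len(series), window):
                history = series[:t]                    # window grows as data accrue
                x = np.arange(len(history))
                coeffs = np.polyfit(x, history, deg=1)  # placeholder trend model
                future = np.arange(t, min(t + window, len(series)))
                preds.extend(np.polyval(coeffs, future))
            return np.array(preds)

        load = 100 + 0.5 * np.arange(60) + np.random.default_rng(4).normal(0, 2, 60)
        print(extended_window_forecast(load)[:5])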

    Article
    Technologies, Vol. 12, Pages 5: Revisiting Probabilistic Latent Semantic Analysis: Extensions, Challenges and Insights https://www.mdpi.com/2227-7080/12/1/5 (2024-01-03)

    Technologies doi: 10.3390/technologies12010005

    Authors: Pau Figuera Pablo García Bringas

    This manuscript provides a comprehensive exploration of Probabilistic Latent Semantic Analysis (PLSA), highlighting its strengths, drawbacks, and challenges. PLSA, originally a tool for information retrieval, models a table of co-occurrences probabilistically as a mixture of multinomial distributions spanned by a latent class variable and fitted with the expectation–maximization (EM) algorithm. The distributional assumptions and the iterative nature lead to a rigid model, dividing enthusiasts and detractors. These drawbacks have led to several reformulations: the extension of the method to normal data distributions and a non-parametric formulation obtained with the help of Non-negative Matrix Factorization (NMF) techniques. Furthermore, the combination of theoretical studies and programming techniques alleviates the computational cost, thus making the potential of the method explicit: its relation to the Singular Value Decomposition (SVD) means that PLSA can satisfactorily support other techniques, such as the construction of Fisher kernels, the probabilistic interpretation of Principal Component Analysis (PCA), Transfer Learning (TL), and the training of neural networks, among others. We also present open questions as a window for practical and theoretical research.
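
    The EM adjustment mentioned above can be written compactly for a small co-occurrence table; the following numpy sketch is didactic rather than a production implementation:

        import numpy as np

        def plsa(n_dw, n_topics=2, iters=100, seed=0):
            # n_dw: document-word co-occurrence counts, shape (D, W).
            rng = np.random.default_rng(seed)
            D, W = n_dw.shape
            p_z_d = rng.dirichlet(np.ones(n_topics), size=D)   # P(z|d)
            p_w_z = rng.dirichlet(np.ones(W), size=n_topics)   # P(w|z)
            for _ in range(iters):
                # E-step: responsibilities P(z|d,w), shape (D, W, Z).
                joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
                post = joint / joint.sum(axis=2, keepdims=True)
                # M-step: reweight by observed counts and renormalize.
                nz = n_dw[:, :, None] * post
                p_w_z = nz.sum(axis=0).T
                p_w_z /= p_w_z.sum(axis=1, keepdims=True)
                p_z_d = nz.sum(axis=1)
                p_z_d /= p_z_d.sum(axis=1, keepdims=True)
            return p_z_d, p_w_z

        counts = np.array([[5, 2, 0, 0], [4, 3, 0, 1], [0, 0, 6, 3], [1, 0, 5, 4]])
        p_z_d, p_w_z = plsa(counts)
        print(np.round(p_w_z, 2))   # word distributions of the two latent classes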

    Review
    Technologies, Vol. 12, Pages 4: A Novel Machine Learning-Based Prediction Method for Early Detection and Diagnosis of Congenital Heart Disease Using ECG Signal Processing https://www.mdpi.com/2227-7080/12/1/4 (2024-01-02)

    Technologies doi: 10.3390/technologies12010004

    Authors: Prabu Pachiyannan Musleh Alsulami Deafallah Alsadie Abdul Khader Jilani Saudagar Mohammed AlKhathami Ramesh Chandra Poonia

    Congenital heart disease (CHD) represents a multifaceted medical condition that requires early detection and diagnosis for effective management, given its diverse presentations and subtle symptoms that manifest from birth. This research article introduces a groundbreaking healthcare application, the Machine Learning-based Congenital Heart Disease Prediction Method (ML-CHDPM), tailored to address these challenges and expedite the timely identification and classification of CHD in pregnant women. The ML-CHDPM model leverages state-of-the-art machine learning techniques to categorize CHD cases, taking into account pertinent clinical and demographic factors. Trained on a comprehensive dataset, the model captures intricate patterns and relationships, resulting in precise predictions and classifications. The evaluation of the model’s performance encompasses sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve. Remarkably, the findings underscore the ML-CHDPM’s superiority across six pivotal metrics: accuracy, precision, recall, specificity, false positive rate (FPR), and false negative rate (FNR). The method achieves an average accuracy rate of 94.28%, precision of 87.54%, recall rate of 96.25%, specificity rate of 91.74%, FPR of 8.26%, and FNR of 3.75%. These outcomes distinctly demonstrate the ML-CHDPM’s effectiveness in reliably predicting and classifying CHD cases. This research marks a significant stride toward early detection and diagnosis, harnessing advanced machine learning techniques within the realm of ECG signal processing, specifically tailored to pregnant women.
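
    All six reported metrics derive from the binary confusion matrix, and the reported pairs are internally consistent (recall + FNR = 100%, specificity + FPR = 100%); the helper below makes the relationships explicit, with arbitrary illustrative counts:

        def diagnostic_metrics(tp, fp, tn, fn):
            # The six metrics reported for ML-CHDPM, from confusion-matrix counts.
            return {
                "accuracy":    (tp + tn) / (tp + fp + tn + fn),
                "precision":   tp / (tp + fp),
                "recall":      tp / (tp + fn),        # sensitivity, = 1 - FNR
                "specificity": tn / (tn + fp),        # = 1 - FPR
                "FPR":         fp / (fp + tn),
                "FNR":         fn / (fn + tp),
            }

        print(diagnostic_metrics(tp=154, fp=22, tn=245, fn=6))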

    Article
    Technologies, Vol. 12, Pages 3: Graph Learning and Deep Neural Network Ensemble for Supporting Cognitive Decline Assessment https://www.mdpi.com/2227-7080/12/1/3 (2023-12-24)

    Technologies doi: 10.3390/technologies12010003

    Authors: Gabriel Antonesi Alexandru Rancea Tudor Cioara Ionut Anghel

    Cognitive decline represents a significant public health concern due to its severe implications for memory and general health. Early detection is crucial to initiate timely interventions and improve patient outcomes. However, traditional diagnosis methods often rely on personal interpretations or biases, may not detect the early stages of cognitive decline, or involve invasive screening procedures; thus, there is a growing interest in developing non-invasive detection methods that also benefit from technological advances. Wearable devices and Internet of Things sensors can monitor various aspects of daily life together with health parameters and can provide valuable data about people’s behavior. In this paper, we propose a technical solution that can support cognitive decline assessment in its early stages by employing advanced machine learning techniques to detect higher activity fragmentation based on daily activity monitoring with wearable devices. Our approach also considers data coming from wellbeing assessment questionnaires, which can offer other important insights about a monitored person. We use deep neural network models to capture complex, non-linear relationships in the daily activity data and graph learning for the structural wellbeing information in the questionnaire answers. The proposed solution is evaluated in a simulated environment on a large synthetic dataset, with the results showing that our approach can support the early detection of cognitive decline during patient-assessment processes.
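
    One common way to quantify the activity fragmentation mentioned above is the active-to-rest transition probability; the sketch below is an illustrative choice of metric and threshold, not necessarily the paper's:

        import numpy as np

        def active_to_rest_transition_prob(activity, threshold=100):
            # Fragmentation index: probability that an active minute is
            # followed by a rest minute (higher = more fragmented activity).
            active = activity >= threshold
            transitions = np.sum(active[:-1] & ~active[1:])
            return transitions / max(np.sum(active[:-1]), 1)

        rng = np.random.default_rng(5)
        counts = rng.integers(0, 400, 1440)   # one day of per-minute activity counts
        print(f"fragmentation = {active_to_rest_transition_prob(counts):.3f}")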

    Technologies, Vol. 12, Pages 2: Transformative Approach for Heart Rate Prediction from Face Videos Using Local and Global Multi-Head Self-Attention https://www.mdpi.com/2227-7080/12/1/2 (2023-12-22)

    Technologies doi: 10.3390/technologies12010002

    Authors: Smera Premkumar, J. Anitha, Daniela Danciulescu, D. Jude Hemanth

    Heart rate estimation from face videos is an emerging technology with numerous potential applications in healthcare and human–computer interaction. However, most existing approaches overlook long-range spatiotemporal dependencies, which are essential for robust heart rate prediction, and they involve extensive pre-processing steps to improve prediction accuracy, resulting in high computational complexity. In this paper, we propose LGTransPPG, an end-to-end transformer-based framework that eliminates the need for pre-processing steps while achieving improved efficiency and accuracy. LGTransPPG incorporates local and global aggregation techniques to capture fine-grained facial features and contextual information. By leveraging the power of transformers, our framework can effectively model long-range dependencies and temporal dynamics, enhancing heart rate prediction. The proposed approach is evaluated on three publicly available datasets, demonstrating its robustness and generalizability. Furthermore, we achieved a high Pearson correlation coefficient (PCC) of 0.88 between the predicted and actual heart rate values, indicating the efficiency and accuracy of the framework.
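
    The reported agreement metric is straightforward to reproduce. The snippet below computes the Pearson correlation coefficient (PCC) between predicted and reference heart rates with SciPy; the sample values are invented for illustration.

```python
# Pearson correlation coefficient (PCC) between predicted and reference heart
# rates, as used for the evaluation above. Sample values are made up.
import numpy as np
from scipy.stats import pearsonr

hr_true = np.array([72.0, 75.5, 80.1, 68.4, 90.2, 84.7])  # e.g., ground truth (bpm)
hr_pred = np.array([70.8, 76.2, 79.0, 69.9, 88.5, 86.1])  # rPPG model output (bpm)

pcc, p_value = pearsonr(hr_true, hr_pred)
print(f"PCC = {pcc:.2f} (p = {p_value:.3g})")  # values near 1.0 mean close agreement
```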

    Technologies, Vol. 12, Pages 1: Information-Analytical Software for Developing Digital Models of Porous Structures’ Materials Using a Cellular Automata Approach https://www.mdpi.com/2227-7080/12/1/1 (2023-12-20)

    Technologies doi: 10.3390/technologies12010001

    Authors: Igor Lebedev, Anastasia Uvarova, Natalia Menshutina

    Information-analytical software has been developed for creating digital models of the structures of porous materials. The software allows the user to select a model that accurately reproduces the structure of a porous material—here, aerogels—and to create a digital model from which its properties can be predicted. In addition, the software contains models for calculating various properties of aerogels based on their structure, such as pore size distribution and mechanical properties. Models have been implemented that describe various processes in porous structures: hydrodynamics of multicomponent systems, heat and mass transfer, dissolution, sorption, and desorption. With these models, digital models for different types of aerogels can be developed. Pore size distribution is chosen as the comparison parameter: the deviation of the calculated pore size distribution curves from the experimental ones does not exceed 15%, which indicates that the obtained digital model corresponds to the experimental sample. The software contains both existing models used for modeling porous structures and original models developed for the studied aerogels and processes, such as the dissolution of active pharmaceutical ingredients and mass transport in porous media.
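
    To make the cellular-automata approach concrete, here is a minimal sketch that evolves a 2-D binary solid/pore grid with a simple majority smoothing rule and reports porosity. The rule, grid size, and seed density are illustrative assumptions, not the models implemented in the described software.

```python
# Cellular-automaton sketch of a porous structure: a 2-D binary grid
# (1 = solid, 0 = pore) smoothed with a majority rule. All parameters here
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)
grid = (rng.random((128, 128)) < 0.55).astype(np.uint8)

def step(g):
    # Count solid neighbors in the 3x3 Moore neighborhood (periodic boundaries).
    n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # A cell becomes/stays solid when most of its neighbors are solid.
    return (n >= 5).astype(np.uint8)

for _ in range(5):          # a few smoothing iterations yield connected pores
    grid = step(grid)

porosity = 1.0 - grid.mean()
print(f"porosity = {porosity:.2f}")  # could be compared against an experimental target
```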

    Technologies, Vol. 11, Pages 185: Generating Mathematical Expressions for Estimation of Atomic Coordinates of Carbon Nanotubes Using Genetic Programming Symbolic Regression https://www.mdpi.com/2227-7080/11/6/185 (2023-12-18)

    Technologies doi: 10.3390/technologies11060185

    Authors: Nikola Anđelić, Sandi Baressi Šegota

    The study addresses the formidable challenge of calculating atomic coordinates for carbon nanotubes (CNTs) using density functional theory (DFT), a process that can take days. To tackle this issue, the research applies the Genetic Programming Symbolic Regression (GPSR) method to a publicly available dataset. The primary aim is to assess whether the Mathematical Equations (MEs) produced by GPSR can accurately estimate the atomic coordinates calculated with DFT. Given the numerous hyperparameters in GPSR, a Random Hyperparameter Value Search (RHVS) method is devised to pinpoint the combination of hyperparameter values that maximizes estimation accuracy. Two distinct approaches are considered. The first applies GPSR to estimate the calculated coordinates (uc, vc, wc) using all input variables (the initial atomic coordinates u, v, w and the integers n, m specifying the chiral vector). The second applies GPSR to estimate each calculated atomic coordinate using the integers n and m alongside the corresponding initial atomic coordinate. This results in six different dataset variations. The GPSR algorithm is trained using 5-fold cross-validation. The evaluation metrics include the coefficient of determination (R2), mean absolute error (MAE), root mean squared error (RMSE), and the depth and length of the generated MEs. The findings demonstrate that GPSR can estimate CNT atomic coordinates with high accuracy, as indicated by an impressive R2≈1.0. This study contributes to the advancement of accurate estimation techniques for atomic coordinates and introduces a systematic approach for optimizing hyperparameters in GPSR, showcasing its potential for broader applications in materials science and computational chemistry.
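
    A random hyperparameter value search of this kind can be sketched with off-the-shelf tools. The example below draws random hyperparameter combinations for gplearn's SymbolicRegressor and scores each with 5-fold cross-validated R2; the search ranges and the synthetic data are assumptions, and the paper's own GPSR implementation and ranges may differ.

```python
# Hedged sketch of a random hyperparameter value search (RHVS) for symbolic
# regression, using gplearn with 5-fold cross-validation. Ranges and data are
# illustrative assumptions.
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)
X = rng.uniform(-1, 1, size=(200, 5))   # stand-in for (u, v, w, n, m)
y = X[:, 0] + 0.5 * X[:, 1] ** 2        # stand-in for one calculated coordinate

best_score, best_params = -np.inf, None
for _ in range(10):                     # random draws over hyperparameter ranges
    params = dict(
        population_size=int(rng.integers(200, 1000)),
        generations=int(rng.integers(10, 50)),
        tournament_size=int(rng.integers(5, 50)),
        parsimony_coefficient=float(rng.uniform(1e-4, 1e-2)),
        random_state=0,
    )
    est = SymbolicRegressor(**params)
    r2 = cross_val_score(est, X, y, cv=5, scoring="r2").mean()
    if r2 > best_score:
        best_score, best_params = r2, params

print(f"best mean R^2 = {best_score:.4f} with {best_params}")
```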

    Technologies, Vol. 11, Pages 183: Comparing Classical and Quantum Generative Learning Models for High-Fidelity Image Synthesis https://www.mdpi.com/2227-7080/11/6/183 (2023-12-18)

    Technologies doi: 10.3390/technologies11060183

    Authors: Siddhant Jain, Joseph Geraci, Harry E. Ruda

    The field of computer vision has long grappled with the challenging task of image synthesis, which entails the creation of novel high-fidelity images. This task is underscored by the Generative Learning Trilemma, which posits that it is not possible for any image synthesis model to simultaneously excel at high-quality sampling, achieve mode convergence with diverse sample representation, and perform rapid sampling. In this paper, we explore the potential of Quantum Boltzmann Machines (QBMs) for image synthesis, leveraging the D-Wave 2000Q quantum annealer. We undertake a comprehensive performance assessment of QBMs in comparison to established generative models in the field: Restricted Boltzmann Machines (RBMs), Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Denoising Diffusion Probabilistic Models (DDPMs). Our evaluation is grounded in widely recognized scoring metrics, including the Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Scores. The results of our study indicate that QBMs do not significantly outperform the conventional models in terms of the three evaluative criteria. Moreover, QBMs have not demonstrated the capability to overcome the challenges outlined in the Trilemma of Generative Learning. Through our investigation, we contribute to the understanding of quantum computing’s role in generative learning and identify critical areas for future research to enhance the capabilities of image synthesis models.
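
    One of the metrics above, the Fréchet Inception Distance, has a closed form between Gaussian feature statistics: FID = ||mu1 − mu2||^2 + Tr(C1 + C2 − 2(C1·C2)^(1/2)). The sketch below implements it with NumPy and SciPy; random vectors stand in for Inception embeddings.

```python
# Fréchet Inception Distance (FID) between two sets of feature vectors.
# Random features stand in for Inception embeddings of real/generated images.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(seed=0)
print(fid(rng.normal(0.0, 1.0, (500, 64)),   # "real" features
          rng.normal(0.1, 1.0, (500, 64))))  # "generated" features
```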

    Technologies, Vol. 11, Pages 184: The Holby–Morgan Model of Platinum Catalyst Degradation in PEM Fuel Cells: Range of Feasible Parameters Achieved Using Voltage Cycling https://www.mdpi.com/2227-7080/11/6/184 (2023-12-18)

    Technologies doi: 10.3390/technologies11060184

    Authors: Victor A. Kovtunenko

    Loss of electrochemical surface area in proton-exchange membrane fuel cells is of great practical importance, since such degradation strongly affects the durability and lifetime of fuel cells. In this paper, the electrokinetic model developed by Holby and Morgan is considered. The paper describes degradation mechanisms in the membrane catalyst, represented by platinum dissolution, platinum diffusion, and platinum oxide formation. The one-dimensional model is governed by nonlinear reaction–diffusion equations posed in the cathodic catalyst layer, with Butler–Volmer relationships for the reaction rates. The governing system is endowed with initial conditions, a mixed no-flux boundary condition at the interface with the gas diffusion layer, and a perfectly absorbing condition at the membrane boundary. In cyclic voltammetry tests, a non-symmetric square waveform is applied to the electric potential difference between 0.6 and 0.9 V, held for 10 and 30 s, respectively, according to the protocol of the European Fuel Cell and Hydrogen Joint Undertaking. Aimed at mitigation strategies, the impact of cycling operating conditions and model parameters on the loss rate of active area is investigated. The global behavior with respect to parameter variation is studied using sensitivity analysis. Identifying feasible and infeasible values helps determine the range of test parameters employed in the model. Comprehensive results of numerical simulation tests are presented and discussed.
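
    The cycling protocol and the Butler–Volmer form lend themselves to a compact sketch. Below, the non-symmetric square wave (0.6 V held for 10 s, 0.9 V held for 30 s) is implemented directly from the protocol, together with a generic Butler–Volmer current density; the kinetic parameter values are placeholders, not those of the Holby–Morgan model.

```python
# Sketch of the voltage-cycling protocol (non-symmetric square wave) plus a
# generic Butler-Volmer rate. Kinetic parameters are placeholder assumptions.
import math

V_LOW, V_HIGH = 0.6, 0.9       # electric potential difference (V)
T_LOW, T_HIGH = 10.0, 30.0     # hold times (s); one cycle = 40 s

def applied_potential(t):
    """Square-wave potential at time t (s), per the cycling protocol."""
    return V_LOW if (t % (T_LOW + T_HIGH)) < T_LOW else V_HIGH

def butler_volmer(eta, i0=1e-3, alpha_a=0.5, alpha_c=0.5, T=353.15):
    """Generic Butler-Volmer current density (A/cm^2) for overpotential eta (V)."""
    F, R = 96485.0, 8.314      # Faraday constant (C/mol), gas constant (J/mol/K)
    f = F / (R * T)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

for t in (0.0, 5.0, 15.0, 39.0, 45.0):
    print(f"t = {t:5.1f} s  ->  E = {applied_potential(t):.1f} V")
print(f"i(eta = 0.05 V) = {butler_volmer(0.05):.3e} A/cm^2")
```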
