Modelling: Latest open access articles published in Modelling at https://www.mdpi.com/journal/modelling (MDPI, Creative Commons Attribution (CC-BY), support@mdpi.com)
  • Modelling, Vol. 5, Pages 694-719: Correctness Verification of Mutual Exclusion Algorithms by Model Checking (2024-06-28) https://www.mdpi.com/2673-3951/5/3/37

    Modelling doi: 10.3390/modelling5030037

    Authors: Libero Nigro Franco Cicirelli

    Mutual exclusion algorithms are at the heart of concurrent/parallel and distributed systems. It is well known that such algorithms are very difficult to analyze, and in the literature, different conjectures about starvation freedom and the number of by-passes (also called the overtaking factor) have been formulated for specific algorithms. The overtaking factor affects the (hopefully) bounded waiting time that a process competing to enter the critical section must suffer before accessing the shared resource. This paper proposes a novel modeling approach based on Timed Automata and the Uppaal toolset, which proves effective for studying all the properties of a mutual exclusion algorithm for N≥2 processes by exhaustive model checking. Although the approach, as confirmed by similar experiments reported in the literature, is not scalable due to state-explosion problems and can be practically applied only up to N≤5, it is of great value for revealing the true properties of the analyzed algorithms. For dimensions N>5, the Statistical Model Checker of Uppaal can be used, which, although based on simulations, can confirm properties through estimations and probabilities. This paper describes the proposed modeling and verification method and applies it to several mutual exclusion algorithms, thus retrieving known properties but also showing new results about properties often studied only by informal reasoning.
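In miniature, the kind of exhaustive verification Uppaal performs can be sketched as a breadth-first search over all interleavings of a concrete algorithm. The plain-Python sketch below checks mutual exclusion for Peterson's two-process algorithm; it is an illustrative stand-in, not the paper's Timed Automata models.

```python
from collections import deque

# State: (pc0, pc1, flag0, flag1, turn); pc codes the next action of a process:
# 0 = raise flag, 1 = yield turn, 2 = busy-wait test, 3 = inside critical section.

def step(state, i):
    """Successor state when process i takes one step, or None if it is blocked."""
    pcs = [state[0], state[1]]
    flags = [state[2], state[3]]
    turn = state[4]
    j = 1 - i
    if pcs[i] == 0:                      # flag[i] := True
        flags[i], pcs[i] = True, 1
    elif pcs[i] == 1:                    # turn := j
        turn, pcs[i] = j, 2
    elif pcs[i] == 2:                    # proceed only if not (flag[j] and turn == j)
        if flags[j] and turn == j:
            return None
        pcs[i] = 3
    else:                                # leave the critical section
        flags[i], pcs[i] = False, 0
    return (pcs[0], pcs[1], flags[0], flags[1], turn)

def check_mutual_exclusion():
    """Breadth-first search over every interleaving; True if the CS is never shared."""
    init = (0, 0, False, False, 0)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if s[0] == 3 and s[1] == 3:      # both processes in the CS: violation
            return False
        for i in (0, 1):
            t = step(s, i)
            if t is not None and t not in seen:
                seen.add(t)
                frontier.append(t)
    return True
```

`check_mutual_exclusion()` returns True for Peterson's protocol; dropping the busy-wait guard makes the search reach a state with both processes in the critical section and return False.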

    Modelling, Vol. 5, Pages 673-693: Multi-Criteria Response Surface Optimization of Centrifugal Pump Performance Using CFD for Wastewater Application (2024-06-27) https://www.mdpi.com/2673-3951/5/3/36

    Modelling doi: 10.3390/modelling5030036

    Authors: Edwin Pagayona Jaime Honra

    The effective transport of high-viscosity fluids in wastewater treatment systems is heavily contingent upon the operational efficiency of centrifugal pumps. However, challenges arise in operating these pumps under such conditions due to the detrimental impact of viscosity. This study is focused on enhancing the performance of centrifugal pumps by examining the influence of design and impeller configuration. By employing CFD analysis in ANSYS, this study examines the effects of varying inlet and outlet impeller diameters as well as different numbers of impeller blades on pump performance. The investigation entails three core stages: pre-processing, encompassing the creation of geometry, meshing, and study configuration; processing, which involves defining physics settings, selecting the solver type, and specifying boundary conditions; and post-processing, dedicated to the interpretation of results derived from model creation and solution. Leveraging Genetic Aggregation for response surface modelling facilitates the pinpointing of effective design configurations rooted in specific pump performance goals, thereby resulting in noteworthy performance enhancements. Notably, an optimal pump design featuring a 5-blade impeller with inlet and outlet diameters of 55.92 mm and 207.78 mm, respectively, yielded significant improvements of 26.51% in head, 2.53% in static efficiency, and 62.30% in incipient net positive suction head (NPSHi).
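Response-surface modelling of this kind can be sketched generically: fit a low-order polynomial surrogate to sampled CFD results and interrogate it for optima. The snippet below fits a one-variable quadratic surrogate by least squares; it is a generic stand-in, since the paper uses Genetic Aggregation over several design variables.

```python
def gauss_solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def quadratic_response_surface(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x**2 via the normal equations;
    returns the coefficients [a, b, c]."""
    X = [[1.0, x, x * x] for x in xs]
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(xs))) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[r][i] * ys[r] for r in range(len(xs))) for i in range(3)]
    return gauss_solve(XtX, Xty)
```

Once fitted, the surrogate can be evaluated cheaply at many candidate designs, which is what makes response-surface optimization tractable compared with running a CFD solve per candidate.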

    Modelling, Vol. 5, Pages 659-672: Impact of Volute Throat Area and Gap Width on the Hydraulic Performance of Low-Specific-Speed Centrifugal Pump (2024-06-26) https://www.mdpi.com/2673-3951/5/3/35

    Modelling doi: 10.3390/modelling5030035

    Authors: Muhammad Fasahat Khan Tim Gjernes Nicholas Guenther Jean-Pierre Hickey

    This paper investigates the influence of the volute geometry on the hydraulic performance of a low-specific-speed centrifugal pump using numerical simulations. The performance characteristics for the pump with the volute geometry designed using the constant velocity method show a significant discrepancy between the design point and the best efficiency point (BEP). This design methodology also results in a relatively flat head–capacity curve. These are both undesirable characteristics which can be mitigated by a reduction in the volute throat area. Reducing the throat area also lowers the power consumption and increases efficiency, especially at underload and design flow conditions. These impacts of the volute throat area on performance characteristics are investigated in terms of the change in internal flow characteristics due to the reduction in the volute throat area. Another aspect of the study is the impact of the width of the volute gap on performance characteristics. A reduction in the gap width results in a nearly vertical shift of the head–capacity curve, so that the head delivered is higher across all the flow rates as the gap width is reduced. This is also accompanied by a slight improvement in efficiency under design flow and overload conditions. Numerical simulations are used to relate the change in performance characteristics with internal flow characteristics.

    Modelling, Vol. 5, Pages 642-658: Modeling and Optimization of Concrete Mixtures Using Machine Learning Estimators and Genetic Algorithms (2024-06-24) https://www.mdpi.com/2673-3951/5/3/34

    Modelling doi: 10.3390/modelling5030034

    Authors: Ana I. Oviedo Jorge M. Londoño John F. Vargas Carolina Zuluaga Ana Gómez

    This study presents a methodology to optimize concrete mixtures by integrating machine learning (ML) and genetic algorithms. ML models are used to predict compressive strength, while genetic algorithms optimize the mixture cost under quality constraints. Using a dataset of over 19,000 samples from a local ready-mix concrete producer, various predictive ML models were trained and evaluated to identify cost-effective solutions. The results show that the optimized mixtures meet the desired compressive strength range and are cost-efficient, with 50% of the solutions yielding a cost below that of 98% of the test cases. CatBoost emerged as the best ML technique, achieving a mean absolute error (MAE) below 5 MPa. This combined approach enhances quality, reduces costs, and improves production efficiency in concrete manufacturing.
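The coupling of a strength predictor with a genetic optimizer can be sketched as follows. Everything below is illustrative: the linear surrogate stands in for the paper's trained CatBoost model, and the unit costs, variable bounds, and strength window are invented for the example.

```python
import random

random.seed(0)

# Hypothetical surrogate: predicted compressive strength (MPa) from cement and
# water mass fractions. In the paper this role is played by a trained ML model.
def predicted_strength(cement, water):
    return 80.0 * cement - 40.0 * water + 20.0

def cost(cement, water):
    # Illustrative unit costs, not the producer's real cost data.
    return 120.0 * cement + 1.0 * water

STRENGTH_RANGE = (30.0, 40.0)  # desired compressive strength window, MPa

def fitness(ind):
    """Cost to minimise, with a large penalty when strength leaves the window."""
    c, w = ind
    s = predicted_strength(c, w)
    penalty = 0.0 if STRENGTH_RANGE[0] <= s <= STRENGTH_RANGE[1] else 1e3
    return cost(c, w) + penalty

def evolve(pop_size=40, generations=60):
    pop = [(random.uniform(0.1, 0.5), random.uniform(0.1, 0.3))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the cheapest half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            children.append(tuple(            # blend crossover + Gaussian mutation
                min(0.6, max(0.05, (x + y) / 2 + random.gauss(0, 0.01)))
                for x, y in zip(a, b)))
        pop = elite + children
    return min(pop, key=fitness)

best_mix = evolve()
```

The penalty term is what encodes the "quality constraint": infeasible mixtures survive selection only until a feasible one is found, after which elitism keeps the search inside the admissible strength window.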

    Modelling, Vol. 5, Pages 625-641: Micromechanical Estimates Compared to FE-Based Methods for Modelling the Behaviour of Micro-Cracked Viscoelastic Materials (2024-06-20) https://www.mdpi.com/2673-3951/5/2/33

    Modelling doi: 10.3390/modelling5020033

    Authors: Sarah Abou Chakra Benoît Bary Eric Lemarchand Christophe Bourcier Sylvie Granet Jean Talandier

    The purpose of this study is to investigate the effective behaviour of a micro-cracked material whose matrix bulk and shear moduli are ruled by a linear viscoelastic Burgers model. The analysis includes a detailed study of randomly oriented and distributed cracks displaying an overall isotropic behaviour, as well as aligned cracks resulting in a transversely isotropic medium. Effective material properties are approximated with the assumption that the homogenized equivalent medium exhibits the characteristics of a Burgers model, leading to the identification of short-term and long-term homogenized moduli in the Laplace–Carson space through simplified formulations. The crucial advantage of this analytical technique consists in avoiding calculations of the inverse Laplace–Carson transform. The micromechanical estimates are validated through comparisons with FE numerical simulations on 3D microstructures generated with zero-thickness void cracks of disc shape. Intersections between randomly oriented cracks are accounted for, thereby highlighting a potential percolation phenomenon. The effects of micro-cracks on the material’s behaviour are then studied with the aim of providing high-performance creep models for macrostructure calculations at a moderate computation cost through the application of analytical homogenization techniques.
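For reference, a Burgers model combines a Maxwell element and a Kelvin–Voigt element in series; its uniaxial creep compliance has the textbook closed form sketched below, with E_m, eta_m the Maxwell and E_k, eta_k the Kelvin–Voigt parameters. The paper works with the bulk/shear split in the Laplace–Carson space rather than this time-domain form.

```python
import math

def burgers_creep_compliance(t, E_m, eta_m, E_k, eta_k):
    """J(t) = 1/E_m + t/eta_m + (1/E_k) * (1 - exp(-E_k * t / eta_k)).
    The short-term response is 1/E_m; the long-term behaviour is the
    viscous flow term t/eta_m."""
    return 1.0 / E_m + t / eta_m + (1.0 / E_k) * (1.0 - math.exp(-E_k * t / eta_k))
```

The two limits visible in this expression (instantaneous elasticity, steady viscous creep) are exactly the "short-term and long-term homogenized moduli" the abstract refers to identifying for the equivalent medium.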

    Modelling, Vol. 5, Pages 600-624: Mixing Enhancement Study in Axisymmetric Trapped-Vortex Combustor for Propane, Ammonia and Hydrogen (2024-06-07) https://www.mdpi.com/2673-3951/5/2/32

    Modelling doi: 10.3390/modelling5020032

    Authors: Heval Serhat Uluk Sam M. Dakka Kuldeep Singh

    The trapped-vortex combustor (TVC) is an alternative to conventional aeroengine combustor designs. Its separate fuel and air injection and compact design make it a strong candidate for conventional fuel usage. However, the performance of a trapped-vortex combustor with alternative fuels such as ammonia and hydrogen under the actual operating conditions of an aeroengine is not well understood. The present paper focused on the performance evaluation of TVCs with the prospective fuels ammonia and hydrogen, including under realistic combustor operating conditions. The investigated fuels were injected into a cavity with 0-, 15-, 30- and 45-degree transverse-angled air injectors to evaluate the mixing enhancement of the air and fuel under idle and low-power conditions. The mixing behavior of hydrogen differed significantly from that of the conventional fuel, i.e., propane. Transverse injection of the air also improved the mixing efficiency compared with the normal injection configuration, and mixing efficiency was higher for the 30- and 45-degree transverse-angled air injectors than for the 0- and 15-degree ones.
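A mixing-efficiency comparison of this kind needs a scalar mixedness index computed from the CFD field. One common variance-based choice is sketched below; this is an assumption for illustration, as the paper may define its index differently.

```python
import math

def mixing_efficiency(fractions):
    """Variance-based mixedness of a sampled mixture-fraction field:
    1.0 for a perfectly uniform mixture, 0.0 for a fully segregated one.
    The fully segregated variance limit is mean * (1 - mean)."""
    n = len(fractions)
    mean = sum(fractions) / n
    var = sum((f - mean) ** 2 for f in fractions) / n
    var_max = mean * (1.0 - mean)
    return 1.0 - math.sqrt(var / var_max) if var_max > 0 else 1.0
```

Evaluated over samples of the fuel mass fraction in the cavity, an index like this lets the four injector angles be ranked on a single axis, which is how statements such as "mixing efficiency was higher for the 30- and 45-degree injectors" are made quantitative.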

    Modelling, Vol. 5, Pages 585-599: Estimation Approach for a Linear Quantile-Regression Model with Long-Memory Stationary GARMA Errors (2024-06-04) https://www.mdpi.com/2673-3951/5/2/31

    Modelling doi: 10.3390/modelling5020031

    Authors: Oumaima Essefiani Rachid El Halimi Said Hamdoune

    The aim of this paper is to assess the significant impact of using quantile analysis in multiple fields of scientific research. Here, we focus on estimating conditional quantile functions when the errors follow a GARMA (Generalized Auto-Regressive Moving Average) model. Our key theoretical contribution involves identifying the Quantile-Regression (QR) coefficients within the context of GARMA errors. We propose a modified maximum-likelihood estimation method using an EM algorithm to estimate the target coefficients and derive their statistical properties. The proposed procedure yields estimators that are strongly consistent and asymptotically normal under mild conditions. In order to evaluate the performance of the proposed estimators, a simulation study is conducted employing the minimum bias and Root Mean Square Error (RMSE) criteria. Furthermore, an empirical application is given to demonstrate the effectiveness of the proposed methodology in practice.
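The building block of any quantile-regression estimator is the check (pinball) loss, whose minimiser is the tau-quantile. The intercept-only case can be sketched in a few lines; this illustrates the loss the QR coefficients minimise, not the paper's EM procedure for GARMA errors.

```python
def pinball_loss(q, ys, tau):
    """Average check (pinball) loss of candidate quantile q at level tau:
    rho_tau(u) = tau*u for u >= 0 and (tau - 1)*u for u < 0."""
    total = 0.0
    for y in ys:
        u = y - q
        total += tau * u if u >= 0 else (tau - 1.0) * u
    return total / len(ys)

def sample_quantile(ys, tau):
    """The tau-quantile minimises the expected pinball loss; for a finite
    sample it suffices to search candidates among the data points."""
    return min(ys, key=lambda q: pinball_loss(q, ys, tau))
```

Replacing the constant q with a linear predictor x'beta turns this into linear quantile regression; the paper's contribution is estimating beta when the error process has long-memory GARMA dependence rather than i.i.d. errors.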

    Modelling, Vol. 5, Pages 569-584: Investigating Mechanical Response and Structural Integrity of Tubercle Leading Edge under Static Loads (2024-05-25) https://www.mdpi.com/2673-3951/5/2/30

    Modelling doi: 10.3390/modelling5020030

    Authors: Ali Esmaeili Hossein Jabbari Hadis Zehtabzadeh Majid Zamiri

    This investigation into the aerodynamic efficiency and structural integrity of tubercle leading edges, inspired by the agile maneuverability of humpback whales, employs a multifaceted experimental and computational approach. By utilizing static load extensometer testing complemented by computational simulations, this study quantitatively assesses the impacts of unique wing geometries on aerodynamic forces and structural behavior. The experimental setup, involving a Wheatstone full-bridge circuit, measures the strain responses of tubercle-configured leading edges under static loads. These measured strains are converted into stress values through Hooke’s law, revealing a consistent linear relationship between the applied loads and induced strains, thereby validating the structural robustness. The experimental results indicate a linear strain increase with load application, demonstrating strain values ranging from 65 με under a load of 584 g to 249 με under a load of 2122 g. These findings confirm the structural integrity of the designs across varying load conditions. Discrepancies noted between the experimental data and simulation outputs, however, underscore the effects of 3D printing imperfections on the structural analysis. Despite these manufacturing challenges, the results endorse the tubercle leading edges’ capacity to enhance aerodynamic performance and structural resilience. This study enriches the understanding of bio-inspired aerodynamic designs and supports their potential in practical fluid mechanics applications, suggesting directions for future research on manufacturing optimizations.
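The strain-to-stress conversion via Hooke's law described above is a one-line calculation once units are handled. The modulus value below is a hypothetical placeholder for a 3D-printed polymer, not a figure from the paper.

```python
def stress_from_microstrain(microstrain, youngs_modulus_gpa):
    """Hooke's law, sigma = E * epsilon, with unit handling:
    E in GPa -> MPa (x 1e3), microstrain -> dimensionless strain (x 1e-6)."""
    return youngs_modulus_gpa * 1.0e3 * microstrain * 1.0e-6

# Hypothetical modulus for a 3D-printed polymer (an assumption, not from the paper):
E_GPA = 2.3
sigma_low = stress_from_microstrain(65, E_GPA)    # strain measured at the 584 g load
sigma_high = stress_from_microstrain(249, E_GPA)  # strain measured at the 2122 g load
```

The reported near-proportionality between applied load and measured strain is exactly what makes this linear conversion valid over the tested load range.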

    Modelling, Vol. 5, Pages 549-568: A State-Based Language for Enhanced Video Surveillance Modeling (SEL) (2024-05-24) https://www.mdpi.com/2673-3951/5/2/29

    Modelling doi: 10.3390/modelling5020029

    Authors: Selene Ramirez-Rosales Luis-Antonio Diaz-Jimenez Daniel Canton-Enriquez Jorge-Luis Perez-Ramos Herlindo Hernandez-Ramirez Ana-Marcela Herrera-Navarro Gabriela Xicotencatl-Ramirez Hugo Jimenez-Hernandez

    SEL, a State-based Language for Video Surveillance Modeling, is a formal language designed to represent and identify activities in surveillance systems through scenario semantics and the creation of motion primitives structured in programs. Motion primitives represent the temporal evolution of motion evidence. They are the most basic motion structures detected as motion evidence, including operators such as sequence, parallel, and concurrency, which indicate trajectory evolution, simultaneity, and synchronization. SEL is a very expressive language that characterizes interactions by describing the relationships between motion primitives. These interactions determine the scenario’s activity and meaning. An experimental model is constructed to demonstrate the value of SEL, incorporating challenging activities in surveillance systems. This approach assesses the language’s suitability for describing complicated tasks.
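The sequence/parallel/concurrency operators over motion primitives can be sketched as predicates on time intervals. The class and function names below are illustrative, not the paper's actual grammar or semantics.

```python
# Hypothetical miniature of SEL-style composition: primitives are named
# intervals of motion evidence, and operators relate their occurrence times.

class Primitive:
    def __init__(self, name, start, end):
        self.name, self.start, self.end = name, start, end

def sequence(a, b):
    """a then b: b begins once a has ended (trajectory evolution)."""
    return a.end <= b.start

def parallel(a, b):
    """a and b overlap in time (simultaneity)."""
    return a.start < b.end and b.start < a.end

def concurrent(a, b):
    """a and b start at the same instant (synchronisation)."""
    return a.start == b.start
```

A "program" in this miniature is then a Boolean combination of such predicates over detected primitives, and an activity is recognised when the combination holds for a scenario's observations.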

    Modelling, Vol. 5, Pages 530-548: Parameter Choice Strategy That Computes Regularization Parameter before Computing the Regularized Solution (2024-05-13) https://www.mdpi.com/2673-3951/5/2/28

    Modelling doi: 10.3390/modelling5020028

    Authors: Santhosh George Jidesh Padikkal Ajil Kunnarath Ioannis K. Argyros Samundra Regmi

    The modeling of many problems of practical interest leads to nonlinear ill-posed equations (for example, the parameter identification problem (see the Numerical section)). In this article, we introduce a new source condition (SC) and a new parameter choice strategy (PCS) for the Tikhonov regularization (TR) method for nonlinear ill-posed problems. The new PCS is introduced using a new SC to compute the regularization parameter (RP) before computing the regularized solution. The theoretical results are verified using a numerical example.
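The idea of fixing the regularization parameter before solving can be illustrated on a linear toy problem. The a-priori rule alpha = delta used here is a simple textbook stand-in for the paper's (more refined) strategy, and the operator and data are invented for the example.

```python
def solve2(a, b):
    """Solve a 2x2 linear system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

def tikhonov(A, b, alpha):
    """Tikhonov-regularized solution x_alpha = (A^T A + alpha I)^(-1) A^T b
    for a 2x2 matrix A."""
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    AtA[0][0] += alpha
    AtA[1][1] += alpha
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve2(AtA, Atb)

delta = 1e-3                      # assumed noise level in the data
alpha = delta                     # parameter fixed *before* solving
A = [[1.0, 0.0], [0.0, 1e-2]]     # mildly ill-conditioned operator
b = [1.0, 1e-2 + delta]           # noisy data for the true solution x = [1, 1]
x = tikhonov(A, b, alpha)
```

Computing alpha first avoids the repeated solves that a-posteriori rules (which search over alpha after inspecting candidate solutions) typically require; that is the practical payoff of a parameter choice strategy of the kind the paper proposes.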

    Modelling, Vol. 5, Pages 502-529: Micro-Mechanical Hyperelastic Modelling for (Un)Filled Polyurethane with Considerations of Strain Amplification (2024-04-24) https://www.mdpi.com/2673-3951/5/2/27

    Modelling doi: 10.3390/modelling5020027

    Authors: Saman H. Razavi Vinicius C. Beber Bernd Mayer

    Polyurethane (PU) is a very versatile material in engineering applications, whose mechanical properties can be tailored by the introduction of active fillers. The current research aims to (i) investigate the effect of active fillers with varying filler loads on the mechanical properties of a PU system and (ii) develop a micro-mechanical model to describe the hyperelastic behavior of (un)filled PU. Three models are taken into consideration: without strain amplification, with constant strain amplification, and with a deformation-dependent strain amplification. The measured uniaxial stress–strain data of the filled PU nanocomposites reveal clear reinforcement due to the incorporation of carbon black at 5, 10 and 20 wt%. At a low concentration (1 wt%), in contrast, two different grades of carbon black and a fumed silica each result in a reduction in the mechanical properties. The micro-mechanical model without strain amplification agrees well with the measured stress–strain curves at low filler concentrations (1 wt%). For higher filler concentrations (5–15 wt%), the micro-mechanical model with constant strain amplification yields better predictions. For samples with a larger filler volume fraction (20 wt%) and for a commercial adhesive, the model with a deformation-dependent strain amplification effect leads to the best predictions, i.e., the highest R2 in curve fitting.

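The constant strain-amplification idea summarized in the abstract above can be sketched numerically. This is an illustrative sketch only, not the authors' model: a neo-Hookean matrix and the Guth-Gold amplification factor for spherical fillers are assumptions introduced here.

```python
# Illustrative sketch (assumed model, not the paper's): the matrix is taken
# to see an amplified stretch lam_eff = 1 + X*(lam - 1), with X estimated by
# the Guth-Gold expression, and responds as an incompressible neo-Hookean solid.

def guth_gold(phi):
    """Strain-amplification factor for filler volume fraction phi (Guth-Gold)."""
    return 1.0 + 2.5 * phi + 14.1 * phi ** 2

def nominal_stress(lam, shear_modulus=1.0, phi=0.0):
    """Uniaxial nominal stress P = G*(lam - lam**-2) of a neo-Hookean solid,
    evaluated at the amplified stretch."""
    lam_eff = 1.0 + guth_gold(phi) * (lam - 1.0)
    return shear_modulus * (lam_eff - lam_eff ** -2)

# Reinforcement: at the same macroscopic stretch, the filled material is stiffer.
unfilled = nominal_stress(1.5, phi=0.0)
filled = nominal_stress(1.5, phi=0.15)
```

At 1.5 macroscopic stretch the filled response exceeds the unfilled one, reproducing qualitatively the reinforcement trend the abstract reports for the 5-15 wt% range.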
    Modelling, Vol. 5, Pages 483-501: Numerical Simulation of the Interaction between a Planar Shock Wave and a Cylindrical Bubble https://www.mdpi.com/2673-3951/5/2/26 2024-04-16

    Modelling doi: 10.3390/modelling5020026

    Authors: Solomon Onwuegbu Zhiyin Yang Jianfei Xie

    Three-dimensional (3D) computational fluid dynamics (CFD) simulations have been carried out to investigate the complex interaction of a planar shock wave (Ma = 1.22) with a cylindrical bubble. The unsteady Reynolds-averaged Navier–Stokes (URANS) approach with a level set coupled with volume of fluid (LSVOF) method has been applied in the present study. The predicted velocities of the refracted wave, transmitted wave, upstream interface, downstream interface, jet, and vortex filaments are in very good agreement with the experimental data. The predicted non-dimensional bubble and vortex velocities also match the experimental data more closely than a simple model of shock-induced Rayleigh–Taylor instability (i.e., Richtmyer–Meshkov instability) and other theoretical models. The simulated changes in the bubble shape and size (length and width) against time agree very well with the experimental results. Comprehensive flow analysis has shown the shock–bubble interaction (SBI) process clearly from the onset of bubble compression up to the formation of vortex filaments, especially elucidating the mechanism of air–jet formation and its development. It is demonstrated for the first time that turbulence is generated at the early phase of the shock–cylindrical bubble interaction process, with the maximum turbulence intensity reaching about 20% around the vortex filament regions at the later phase of the interaction process.

    Modelling, Vol. 5, Pages 458-482: Integrated Modeling of Coastal Processes Driven by an Advanced Mild Slope Wave Model https://www.mdpi.com/2673-3951/5/2/25 2024-04-11

    Modelling doi: 10.3390/modelling5020025

    Authors: Michalis K. Chondros Anastasios S. Metallinos Andreas G. Papadimitriou

    Numerical modeling of wave transformation, hydrodynamics, and morphodynamics in coastal regions holds paramount significance for combating coastal erosion by evaluating and optimizing various coastal protection structures. This study aims to present an integration of numerical models to accurately simulate the coastal processes with the presence of coastal and harbor structures. Specifically, integrated modeling employs an advanced mild slope model as the main driver, which is capable of describing all the wave transformation phenomena, including wave reflection. This model provides radiation stresses as inputs to a hydrodynamic model based on Reynolds-averaged Navier–Stokes equations to simulate nearshore currents. Ultimately, these models feed an additional model that can simulate longshore sediment transport and bed level changes. The models are validated against experimental measurements, including energy dissipation due to bottom friction and wave breaking; combined refraction, diffraction, and breaking over a submerged shoal; wave transformation and wave-generated currents over submerged breakwaters; and wave, current, and sediment transport fields over a varying bathymetry. The models exhibit satisfactory performance in simulating all considered cases, establishing them as efficient and reliable integrated tools for engineering applications in real coastal areas. Moreover, leveraging the validated models, a numerical investigation is undertaken to assess the effects of wave reflection on a seawall on coastal processes for two ideal beach configurations, one with a steeper slope of 1:10 and another with a milder slope of 1:50. The numerical investigation reveals that the presence of reflected waves, particularly on milder bed slopes, significantly influences sediment transport, emphasizing the importance of employing a wave model that takes into account wave reflection as the primary driver for integrated modeling of coastal processes.

    Modelling, Vol. 5, Pages 438-457: Forecasting Future Research Trends in the Construction Engineering and Management Domain Using Machine Learning and Social Network Analysis https://www.mdpi.com/2673-3951/5/2/24 2024-04-06

    Modelling doi: 10.3390/modelling5020024

    Authors: Gasser G. Ali Islam H. El-adaway Muaz O. Ahmed Radwa Eissa Mohamad Abdul Nabi Tamima Elbashbishy Ramy Khalef

    Construction Engineering and Management (CEM) is a broad domain whose publications cover interrelated subdisciplines and are considered a key source of knowledge sharing. Previous studies used scientometric methods to assess the current impact of CEM publications; however, there is a need to predict future citations of CEM publications to identify the expected high-impact trends and guide new research efforts. To tackle this gap in the literature, the authors conducted a study using Machine Learning (ML) algorithms and Social Network Analysis (SNA) to predict CEM-related citation metrics. Using a dataset of 93,868 publications, the authors trained and tested two machine learning classification algorithms: Random Forest (RF) and XGBoost. Validation of the RF and XGBoost models resulted in a balanced accuracy of 79.1% and 79.5%, respectively. Accordingly, XGBoost was selected. Testing of the XGBoost model revealed a balanced accuracy of 80.71%. Using SNA, it was found that while the top CEM subdisciplines in terms of the number of predicted impactful papers are “Project planning and design”, “Organizational issues”, and “Information technologies, robotics, and automation”, the lowest was “Legal and contractual issues”. This paper contributes to the body of knowledge by studying the citation level, strength, and interconnectivity between CEM subdisciplines as well as identifying areas more likely to result in highly cited publications.

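The balanced accuracy reported in the abstract above is the unweighted mean of per-class recall, which avoids inflating scores on imbalanced citation classes. A minimal stdlib sketch of the metric (the paper's actual classifiers, Random Forest and XGBoost, are not reproduced here):

```python
# Balanced accuracy = mean of per-class recall. A majority-class guesser
# can score high plain accuracy on skewed data; balanced accuracy exposes it.
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    hits = defaultdict(int)    # correct predictions per true class
    totals = defaultdict(int)  # instances per true class
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    recalls = [hits[c] / totals[c] for c in totals]
    return sum(recalls) / len(recalls)

# Hypothetical labels: always predicting "low" gives 75% plain accuracy
# but only 50% balanced accuracy (recall 1.0 on "low", 0.0 on "high").
y_true = ["low", "low", "low", "high"]
y_pred = ["low", "low", "low", "low"]
score = balanced_accuracy(y_true, y_pred)
```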
    Modelling, Vol. 5, Pages 424-437: Numerical Analysis of Crack Propagation in an Aluminum Alloy under Random Load Spectra https://www.mdpi.com/2673-3951/5/2/23 2024-04-04

    Modelling doi: 10.3390/modelling5020023

    Authors: Fangli Wang Jie Zheng Kai Liu Mingbo Tong Jinyu Zhou

    This study develops a rapid algorithm coupled with the finite element method to predict the fatigue crack propagation process and select the enhancement factor for the equivalent random load spectrum of accelerated fatigue tests. The proposed algorithm is validated by several fatigue tests of an aluminum alloy under the accelerated random load spectra. In the validation process, two kinds of panels with different geometries and sizes are used to calculate the stress intensity factor, critical crack length, and crack propagation life. The simulated and experimental findings indicate that when the aluminum alloy is in a low plasticity state, the crack propagation life exhibits a linear relationship with the acceleration factor. When the aluminum alloy is in a high plasticity state, this study proposes an empirical formula to calculate the equivalent stress intensity factor and crack propagation life. The normalized empirical formula is independent of the geometry and size of different samples, although the fracture processes are different in the two kinds of panels used in our study. Overall, the numerical method proposed in this paper can be applied to predict the fatigue crack propagation life for the random spectrum of large samples based on the results of the simulated accelerated crack propagation process and the accelerated fatigue tests of small samples to reduce the cost and time of the testing.

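For readers unfamiliar with how an acceleration factor shortens crack-propagation life, a generic Paris-law integration illustrates the mechanism. This is an assumed textbook form, not the paper's algorithm or its fitted constants:

```python
# Paris law sketch (assumed form): da/dN = C * (dK)^m with
# dK = Y * dS * sqrt(pi * a). Cycles to grow a crack from a0 to ac are
# integrated with the midpoint rule over fixed crack-length steps.
import math

def cycles_to_failure(a0, ac, d_stress, C=1e-11, m=3.0, Y=1.0, steps=10000):
    """Number of load cycles to grow a crack from length a0 to ac."""
    da = (ac - a0) / steps
    n = 0.0
    for i in range(steps):
        a = a0 + (i + 0.5) * da                      # midpoint crack length
        dK = Y * d_stress * math.sqrt(math.pi * a)   # stress intensity range
        n += da / (C * dK ** m)
    return n

# Doubling the stress range divides the life by ~2**m for this power law,
# so in the near-linear (low-plasticity) regime life scales predictably
# with the load amplification.
n_base = cycles_to_failure(0.001, 0.02, d_stress=100.0)
n_accel = cycles_to_failure(0.001, 0.02, d_stress=200.0)
```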
    Modelling, Vol. 5, Pages 410-423: On Mechanical and Chaotic Problem Modeling and Numerical Simulation Using Electric Networks https://www.mdpi.com/2673-3951/5/2/22 2024-03-25

    Modelling doi: 10.3390/modelling5020022

    Authors: Pedro Aráez José Antonio Jiménez-Valera Iván Alhama

    After reviewing the use of electrical circuit elements to model dynamic processes or the operation of devices or equipment, both in real laboratory implementations and through ideal circuits implemented in simulation software, a network model design protocol is proposed. This approach, following the basic rules of circuit theory, makes use of controlled generators to implement any type of nonlinearity contained in the governing equations. Such a protocol constitutes an interesting educational tool that makes it possible for nonexpert students in mathematics to design and numerically simulate complex physical processes. Three applications to mechanical and chaotic problems are presented to illustrate the versatility of the proposed protocol.

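The electrical analogy underlying such network models is standard (this sketch is not the authors' protocol): a mass-spring-damper m*x'' + c*x' + k*x = F maps onto a series RLC circuit L*q'' + R*q' + q/C = V with L = m, R = c, C = 1/k, V = F, charge q playing the role of displacement x. Because the governing equations are identical, one integrator serves both readings:

```python
# One explicit-Euler integrator, read either mechanically (m, c, k, F, x)
# or electrically (L, R, 1/C, V, q). Parameters are illustrative.

def simulate(inertia, damping, stiffness, force, x0=1.0, v0=0.0,
             dt=1e-4, steps=20000):
    """Integrate inertia*x'' + damping*x' + stiffness*x = force; return x(T)."""
    x, v = x0, v0
    for _ in range(steps):
        a = (force - damping * v - stiffness * x) / inertia
        x, v = x + dt * v, v + dt * a
    return x

# Mechanical reading: m=2 kg, c=0.5 N*s/m, k=8 N/m, released from x0=1.
mech = simulate(inertia=2.0, damping=0.5, stiffness=8.0, force=0.0)
# Electrical reading: L=2 H, R=0.5 ohm, C=1/8 F, V=0 -- the same equation,
# so the "circuit" trajectory is identical to the mechanical one.
elec = simulate(inertia=2.0, damping=0.5, stiffness=8.0, force=0.0)
```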
    Modelling, Vol. 5, Pages 392-409: Computational Modelling of Intra-Module Connections and Their Influence on the Robustness of a Steel Corner-Supported Volumetric Module https://www.mdpi.com/2673-3951/5/1/21 2024-03-21

    Modelling doi: 10.3390/modelling5010021

    Authors: Si Hwa Heng David Hyland Michael Hough Daniel McCrum

    This paper investigates the robustness of a single 3D volumetric corner-supported module made of square hollow-section (SHS) columns. Typically, the moment–rotation (M-θ) behaviour of connections within the module (intra-module) is assumed to be fully rigid rather than semi-rigid, resulting in inaccurate assessment (i.e., overestimated vertical stiffness) during extreme loading events, such as progressive collapse. The intra-module connections are not capable of rigidly transferring the moment from the beams to the SHS columns. In this paper, a computationally intensive shell element model (SEM) of the module frame is created. The M-θ relationship of the intra-module connections in the SEM is firstly validated against test results by others and then replicated in a new simplified phenomenological beam element model (BEM), using nonlinear spring elements to capture the M-θ relationship. Comparing the structural behaviour of the SEM and BEM, under notional support removal, shows that the proposed BEM with semi-rigid connections (SR-BEM) agrees well with the validated SEM and requires substantially lower modelling time (98.7% lower) and computational effort (97.4% less RAM). When compared to a BEM with the typically modelled fully rigid intra-module connections (FR-BEM), the vertical displacement in the SR-BEM is at least 16% higher. The results demonstrate the importance of an accurate assessment of framing rotational stiffness and the benefits of a computationally efficient model.

    Modelling, Vol. 5, Pages 367-391: A CALPHAD-Informed Enthalpy Method for Multicomponent Alloy Systems with Phase Transitions https://www.mdpi.com/2673-3951/5/1/20 2024-03-08

    Modelling doi: 10.3390/modelling5010020

    Authors: Robert Scherr Philipp Liepold Matthias Markl Carolin Körner

    Solid–liquid phase transitions of metals and alloys play an important role in many technical processes. Therefore, corresponding numerical process simulations need adequate models. The enthalpy method is the current state-of-the-art approach for this task. However, this method has some limitations regarding multicomponent alloys, as it does not consider the enthalpy of mixing, for example. In this work, we present a novel CALPHAD-informed version of the enthalpy method that removes these drawbacks. In addition, special attention is given to the handling of polymorphic as well as solid–liquid phase transitions. Efficient and robust algorithms for the conversion between enthalpy and temperature were developed. We demonstrate the capabilities of the presented method using two different implementations: a lattice Boltzmann and a finite difference solver. We prove the correct behaviour of the developed method in several validation scenarios. Finally, the model is applied to electron beam powder bed fusion—a modern additive manufacturing process for metals and alloys that allows for different powder mixtures to be alloyed in situ to produce complex engineering parts. We reveal that the enthalpy of mixing has a significant effect on the temperature and lifetime of the melt pool and thus on the part properties.

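The enthalpy-temperature conversion mentioned in the abstract above can be sketched under strong simplifications: constant heat capacity, a single solidus-liquidus interval with linearly released latent heat, and no enthalpy of mixing (the very term the CALPHAD-informed method adds). All numbers are illustrative, not from the paper:

```python
# Minimal enthalpy-method sketch: h(T) is monotone (sensible + latent heat),
# so it can be inverted robustly by bisection -- the kind of conversion an
# enthalpy-based solver performs at every cell and time step.

def enthalpy(T, cp=1.0, L=100.0, T_sol=1000.0, T_liq=1010.0):
    """Specific enthalpy: cp*T plus latent heat L released linearly
    across the solidus-liquidus interval [T_sol, T_liq]."""
    h = cp * T
    if T > T_liq:
        h += L
    elif T > T_sol:
        h += L * (T - T_sol) / (T_liq - T_sol)
    return h

def temperature(h, lo=0.0, hi=5000.0, tol=1e-9, **kw):
    """Invert enthalpy(T) by bisection; valid because h(T) is monotone."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if enthalpy(mid, **kw) < h:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

A round trip through the mushy zone, e.g. temperature(enthalpy(1005.0)), recovers the original temperature, which is the robustness property the conversion algorithm needs.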
    Modelling, Vol. 5, Pages 352-366: Effects of Chemical Short-Range Order and Temperature on Basic Structure Parameters and Stacking Fault Energies in Multi-Principal Element Alloys https://www.mdpi.com/2673-3951/5/1/19 2024-02-28

    Modelling doi: 10.3390/modelling5010019

    Authors: Subah Mubassira Wu-Rong Jian Shuozhi Xu

    In the realm of advanced material science, multi-principal element alloys (MPEAs) have emerged as a focal point due to their exceptional mechanical properties and adaptability for high-performance applications. This study embarks on an extensive investigation of four MPEAs—CoCrNi, MoNbTa, HfNbTaTiZr, and HfMoNbTaTi—alongside key pure metals (Mo, Nb, Ta, Ni) to unveil their structural and mechanical characteristics. Utilizing a blend of molecular statics and hybrid molecular dynamics/Monte Carlo simulations, the research delves into the impact of chemical short-range order (CSRO) and thermal effects on the fundamental structural parameters and stacking fault energies in these alloys. The study systematically analyzes quantities such as lattice parameters, elastic constants (C11, C12, and C44), and generalized stacking fault energies (GSFEs) across two distinct structures: random and CSRO. These properties are then evaluated at diverse temperatures (0, 300, 600, 900, 1200 K), offering a comprehensive understanding of temperature’s influence on material behavior. For CSRO, CoCrNi was annealed at 350 K and MoNbTa at 300 K, while HfMoNbTaTi and HfNbTaTiZr were each annealed at 300 K, 600 K, and 900 K. The results indicate that the lattice parameter increases with temperature, reflecting typical thermal expansion behavior. In contrast, both the elastic constants and the GSFE decrease with rising temperature, suggesting reduced stability and reduced resistance to dislocation motion as thermal agitation intensifies. Notably, MPEAs with CSRO structures exhibit higher stiffness and GSFEs compared to their randomly structured counterparts, demonstrating the significant role of atomic ordering in enhancing material strength.

    Modelling, Vol. 5, Pages 339-351: Modeling and Simulation of a Planar Permanent Magnet On-Chip Power Inductor https://www.mdpi.com/2673-3951/5/1/18 2024-02-22

    Modelling doi: 10.3390/modelling5010018

    Authors: Jaber A. Abu Qahouq Mohammad K. Al-Smadi

    This work pursues the on-chip integration of a power inductor together with other power converter components, targeting small size and high saturation current while maintaining a desired or high inductance value. The use of soft magnetic cores increases inductance density but results in a reduced saturation current. This article presents a 3D physical model and a magnetic circuit model for an integrated on-chip power inductor (OPI) to double the saturation current using permanent magnet (PM) material. A ~50 nH, 7.5 A spiral permanent magnet on-chip power inductor (PMOI) is designed, and a 3D physical model is then developed and simulated using the ANSYS®/Maxwell® software package (version 2017.1). The 3D physical model simulation results agree with the presented magnetic circuit model and show that, in the example PMOI design, the addition of the PM increases the saturation current of the OPI from 4 A to 7.5 A, while the size and inductance value remain unchanged.
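
The saturation-current argument can be illustrated with a lumped magnetic-circuit sketch: the PM contributes a bias flux that opposes the coil flux, so the coil can drive more current before the net core flux saturates, while L = N²/R is unchanged. The turn count, reluctance, and bias fraction below are illustrative assumptions, not values from the article.

```python
# Lumped magnetic-circuit sketch of a PM-biased on-chip inductor.
# All numeric values are illustrative, not taken from the article.

def saturation_current(n_turns, reluctance, flux_sat, flux_pm=0.0):
    """Coil current at which the net core flux reaches saturation.

    The permanent magnet contributes a bias flux flux_pm opposing the
    coil flux, so saturation occurs when
        n_turns * i / reluctance - flux_pm == flux_sat.
    """
    return (flux_sat + flux_pm) * reluctance / n_turns

N = 4                   # turns of the spiral winding (assumed)
L = 50e-9               # target inductance, H
R = N**2 / L            # core reluctance from L = N^2 / R
flux_sat = N * 4.0 / R  # chosen so the unbiased design saturates at 4 A

i_no_pm = saturation_current(N, R, flux_sat)                  # 4.0 A
i_pm = saturation_current(N, R, flux_sat, 0.875 * flux_sat)   # 7.5 A
print(f"{i_no_pm:.1f} A -> {i_pm:.1f} A at unchanged L = {N**2 / R * 1e9:.0f} nH")
```

The bias fraction 0.875 was picked so the sketch reproduces the 4 A to 7.5 A improvement reported in the abstract; the real design sets it through PM material and geometry.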

    Modeling and Simulation of a Planar Permanent Magnet On-Chip Power Inductor Jaber A. Abu Qahouq Mohammad K. Al-Smadi doi: 10.3390/modelling5010018 Modelling 2024-02-22 5 1
    Article
    339 10.3390/modelling5010018 https://www.mdpi.com/2673-3951/5/1/18
    Modelling, Vol. 5, Pages 315-338: Seismic Resilience of Emergency Departments: A Case Study https://www.mdpi.com/2673-3951/5/1/17

    Modelling doi: 10.3390/modelling5010017

    Authors: Maria Pianigiani Stefania Viti Marco Tanganelli

    In this work, the seismic resilience of the Emergency Department of a hospital complex located in Tuscany (Italy), including its nonstructural components and organizational features, has been quantified. Special attention has been paid to the ceilings, whose potential damage stood out in past earthquakes. A comprehensive metamodel has been set up, which relates all the considered parameters to the assumed response quantity, i.e., the waiting time of the yellow-code patients arriving at the Emergency Department in the hours immediately after the seismic event. The seismic resilience of the Emergency Department has been measured for potential earthquakes compatible with the seismic hazard of the area.

    Seismic Resilience of Emergency Departments: A Case Study Maria Pianigiani Stefania Viti Marco Tanganelli doi: 10.3390/modelling5010017 Modelling 2024-02-22 5 1
    Article
    315 10.3390/modelling5010017 https://www.mdpi.com/2673-3951/5/1/17
    Modelling, Vol. 5, Pages 292-314: Intent Identification by Semantically Analyzing the Search Query https://www.mdpi.com/2673-3951/5/1/16

    Modelling doi: 10.3390/modelling5010016

    Authors: Tangina Sultana Ashis Kumar Mandal Hasi Saha Md. Nahid Sultan Md. Delowar Hossain

    Understanding and analyzing the search intent of a user semantically based on their input query has emerged as an intriguing challenge in recent years. The task suffers from the small scale of human-labeled training data, which yields poor hypotheses for rare words. The majority of data portals employ keyword-driven search functionality to explore content within their repositories. However, keyword-based search cannot identify the users’ search intent accurately. Integrating a query-understanding framework into keyword search engines has the potential to enhance their performance, bridging the gap in interpreting the user’s search intent more effectively. In this study, we propose a novel approach that focuses on spatial and temporal information, phrase detection, and semantic similarity recognition to detect the user’s intent from the search query. We use the n-gram probabilistic language model for phrase detection. Furthermore, we propose a probability-aware gated mechanism for RoBERTa (Robustly Optimized Bidirectional Encoder Representations from Transformers Approach) embeddings to semantically detect the user’s intent. We analyze and compare the performance of the proposed scheme with existing state-of-the-art schemes. Furthermore, a detailed case study has been conducted to validate the model’s proficiency in semantic analysis, emphasizing its adaptability and potential for real-world applications where nuanced intent understanding is crucial. The experimental results demonstrate that our proposed system can significantly improve the accuracy of detecting the users’ search intent as well as the quality of classification during search.
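
As an illustration of n-gram phrase detection, the sketch below scores adjacent token pairs with pointwise mutual information over a toy query log. The paper's n-gram probabilistic language model is more elaborate; the corpus, counts, and threshold here are assumptions for demonstration only.

```python
import math
from collections import Counter

def detect_phrases(tokens, min_count=2, threshold=0.0):
    """Score adjacent word pairs with pointwise mutual information (PMI);
    pairs scoring above the threshold are treated as phrases.
    A sketch of n-gram phrase detection, not the paper's exact model."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)
    phrases = {}
    for (a, b), c in bigrams.items():
        if c < min_count:
            continue  # ignore rare pairs: their statistics are unreliable
        pmi = math.log((c * total) / (unigrams[a] * unigrams[b]))
        if pmi > threshold:
            phrases[(a, b)] = pmi
    return phrases

# Hypothetical query log: "new york" recurs as a unit and should surface.
query_log = ("new york weather today new york hotels "
             "weather in new york").split()
print(detect_phrases(query_log))
```

On this toy log, only the bigram ("new", "york") clears both the frequency and PMI thresholds, so it would be fed to the downstream intent model as a single phrase token.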

    Intent Identification by Semantically Analyzing the Search Query Tangina Sultana Ashis Kumar Mandal Hasi Saha Md. Nahid Sultan Md. Delowar Hossain doi: 10.3390/modelling5010016 Modelling 2024-02-22 5 1
    Article
    292 10.3390/modelling5010016 https://www.mdpi.com/2673-3951/5/1/16
    Modelling, Vol. 5, Pages 276-291: An Efficient Explicit Moving Particle Simulation Solver for Simulating Free Surface Flow on Multicore CPU/GPUs https://www.mdpi.com/2673-3951/5/1/15

    Modelling doi: 10.3390/modelling5010015

    Authors: Yu Zhao Fei Jiang Shinsuke Mochizuki

    The moving particle simulation (MPS) method is a simulation technique capable of calculating free surface and incompressible flows. As a particle-based method, MPS requires significant computational resources when simulating flow in a large-scale domain with a huge number of particles. Therefore, improving computational speed is a crucial aspect of current research in particle methods. In recent decades, many-core CPUs and GPUs have been widely utilized in scientific simulations to significantly enhance computational efficiency. However, the implementation of MPS on different types of hardware is not a trivial task. In this study, we present an implementation method for the explicit MPS that utilizes the Taichi parallel programming language. When it comes to CPU computing, Taichi’s computational efficiency is comparable to that of OpenMP. Nevertheless, when GPU computing is utilized, the acceleration of Taichi in parallel computing is not as fast as the CUDA implementation. Our developed explicit MPS solver demonstrates significant performance improvements in simulating dam-break flow dynamics.
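
The per-particle arithmetic that such a solver parallelizes can be sketched in a few lines: the standard MPS kernel weight and the particle number density it defines, which drops near a free surface. This pure-Python sketch only shows the computation a Taichi (or OpenMP/CUDA) kernel would run per particle; the support radius and particle layout are illustrative.

```python
import math

RE = 2.1  # kernel support radius in units of particle spacing (a common MPS choice)

def weight(r, re=RE):
    """Standard MPS kernel: w(r) = re/r - 1 inside the support radius, else 0."""
    return re / r - 1.0 if 0.0 < r < re else 0.0

def number_density(i, positions, re=RE):
    """Particle number density n_i = sum of kernel weights over neighbours."""
    xi, yi = positions[i]
    return sum(
        weight(math.hypot(xj - xi, yj - yi), re)
        for j, (xj, yj) in enumerate(positions) if j != i
    )

# Regular 5x5 particle block with unit spacing: the centre particle has a
# full neighbourhood, while a corner particle has a truncated one, which is
# how MPS detects free-surface particles (reduced number density).
grid = [(x, y) for x in range(5) for y in range(5)]
centre = grid.index((2, 2))
corner = grid.index((0, 0))
print(number_density(centre, grid), number_density(corner, grid))
```

A production solver replaces the all-pairs loop with a cell-linked neighbour list and maps the outer loop over particles onto CPU threads or GPU blocks, which is exactly where Taichi, OpenMP, and CUDA differ in convenience and speed.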

    An Efficient Explicit Moving Particle Simulation Solver for Simulating Free Surface Flow on Multicore CPU/GPUs Yu Zhao Fei Jiang Shinsuke Mochizuki doi: 10.3390/modelling5010015 Modelling 2024-02-19 5 1
    Article
    276 10.3390/modelling5010015 https://www.mdpi.com/2673-3951/5/1/15
    Modelling, Vol. 5, Pages 265-275: Model for Hydrogen Production Scheduling Optimisation https://www.mdpi.com/2673-3951/5/1/14

    Modelling doi: 10.3390/modelling5010014

    Authors: Vitalijs Komasilovs Aleksejs Zacepins Armands Kviesis Vladislavs Bezrukovs

    This article presents a model for optimising the scheduling of hydrogen production processes, addressing the growing demand for efficient and sustainable energy sources. The study focuses on the integration of advanced scheduling techniques to improve the overall performance of the hydrogen electrolyser. The proposed model leverages constraint programming and satisfiability (CP-SAT) techniques to systematically analyse complex production schedules, considering factors such as production unit capacities, resource availability and energy costs. By incorporating real-world constraints, such as fluctuating energy prices and the availability of renewable energy, the optimisation model aims to improve overall operational efficiency and reduce production costs. CP-SAT was applied to achieve more efficient control of the electrolysis process. The optimisation of the scheduling task was set for a 24 h time period with time resolutions of 1 h and 15 min. The performance of the proposed CP-SAT model was then compared with the Monte Carlo Tree Search (MCTS)-based model developed in our previous work. CP-SAT was shown to perform better, though it has several limitations. The model's response to changes in the input parameters has also been analysed.
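
The shape of this optimisation can be sketched with an exhaustive search over hourly on/off decisions, minimising energy cost subject to a production target. A CP-SAT model encodes the same structure with Boolean variables and constraints and scales far beyond what enumeration can handle; the prices, power draw, and hydrogen yield below are invented for illustration.

```python
from itertools import product

# Hourly electricity prices (EUR/MWh) and electrolyser data -- illustrative
prices = [90, 60, 30, 25, 40, 80]   # a 6-hour horizon for brevity
power_mw = 1.0                      # electrolyser draw when running
h2_per_hour = 18.0                  # kg of hydrogen per running hour
target_kg = 54.0                    # production target over the horizon

def best_schedule(prices, target_kg):
    """Exhaustively search on/off schedules meeting the production target
    at minimum energy cost -- what a CP-SAT solver does at full scale."""
    best = None
    for sched in product((0, 1), repeat=len(prices)):
        if sum(sched) * h2_per_hour < target_kg:
            continue  # infeasible: production target not met
        cost = sum(s * p * power_mw for s, p in zip(sched, prices))
        if best is None or cost < best[0]:
            best = (cost, sched)
    return best

cost, sched = best_schedule(prices, target_kg)
print(sched, cost)   # the electrolyser runs in the three cheapest hours
```

With a 24 h horizon at 15 min resolution the search space is 2^96, which is why the paper moves from enumeration-style reasoning to CP-SAT.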

    Model for Hydrogen Production Scheduling Optimisation Vitalijs Komasilovs Aleksejs Zacepins Armands Kviesis Vladislavs Bezrukovs doi: 10.3390/modelling5010014 Modelling 2024-02-19 5 1
    Article
    265 10.3390/modelling5010014 https://www.mdpi.com/2673-3951/5/1/14
    Modelling, Vol. 5, Pages 238-264: Methodology for International Transport Corridor Macro-Modeling Using Petri Nets at the Early Stages of Corridor Development with Limited Input Data https://www.mdpi.com/2673-3951/5/1/13

    Modelling doi: 10.3390/modelling5010013

    Authors: Igor Kabashkin Zura Sansyzbayeva

    International transport corridors (ITCs) are intricate logistical networks essential for global trade flows. The effective modeling of these corridors provides invaluable insights into optimizing the transport system. However, existing approaches have significant limitations in dynamically representing the complexities and uncertainties inherent in ITC operations and at the early stages of ITC development when data are limited. This gap is addressed through the application of Evaluation Petri Nets (E-Nets), which facilitate the detailed, flexible, and responsive macro-modeling of international transport corridors. This paper proposes a novel methodology for developing E-Net-based macro-models of corridors by incorporating key parameters like transportation time, costs, and logistics performance. The model is scalable, enabling analysis from an international perspective down to specific country segments. E-Nets overcome limitations of conventional transport models by capturing the interactive, stochastic nature of ITCs. The proposed modeling approach and scalability provide strategic insights into optimizing corridor efficiency. This research delivers a streamlined yet comprehensive methodology for ITC modeling using E-Nets. The presented framework has substantial potential for enhancing logistics system analysis and planning.
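
The token-game core of any Petri net model can be sketched as follows: a transition fires when all its input places hold tokens, consuming and producing tokens. The toy net moves a consignment through a hypothetical two-leg corridor with a shared customs resource; E-Nets add timing and token attributes, which this sketch omits.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Consume input tokens, produce output tokens; returns a new marking."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical corridor: a consignment moves origin -> border -> destination;
# the border crossing also needs a free customs slot (a shared resource that
# is returned after clearance).
transitions = {
    "depart": {"in": {"origin": 1}, "out": {"border": 1}},
    "clear":  {"in": {"border": 1, "customs_slot": 1},
               "out": {"destination": 1, "customs_slot": 1}},
}
marking = {"origin": 1, "customs_slot": 1}
for name in ("depart", "clear"):
    assert enabled(marking, transitions[name])
    marking = fire(marking, transitions[name])
print(marking)
```

Scaling this from one consignment to corridor macro-modeling means adding places per country segment, timed transitions for transport legs, and token attributes for cost and cargo type, which is the E-Net extension the paper builds on.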

    Methodology for International Transport Corridor Macro-Modeling Using Petri Nets at the Early Stages of Corridor Development with Limited Input Data Igor Kabashkin Zura Sansyzbayeva doi: 10.3390/modelling5010013 Modelling 2024-02-17 5 1
    Article
    238 10.3390/modelling5010013 https://www.mdpi.com/2673-3951/5/1/13
    Modelling, Vol. 5, Pages 223-237: Stress–Strength Reliability of the Type P(X < Y) for Birnbaum–Saunders Components: A General Result, Simulations and Real Data Set Applications https://www.mdpi.com/2673-3951/5/1/12

    Modelling doi: 10.3390/modelling5010012

    Authors: Felipe S. Quintino Luan Carlos de Sena Monteiro Ozelim Tiago A. da Fonseca Pushpa Narayan Rathie

    An exact expression for R=P(X<Y) has been obtained when X and Y are independent and follow Birnbaum–Saunders (BS) distributions. Using some special functions, it was possible to express R analytically with minimal parameter restrictions. Monte Carlo simulations and two applications considering real datasets were carried out to show the performance of the BS models in reliability scenarios. The new expressions are accurate and easy to use, making the results of interest to practitioners using BS models.
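
A closed-form result of this kind is typically cross-checked by Monte Carlo. A Birnbaum–Saunders variate has the normal representation T = β(αZ/2 + √((αZ/2)² + 1))² with Z ~ N(0, 1), which makes simulation of R = P(X < Y) a few lines; the parameter values below are illustrative, not taken from the paper's applications.

```python
import math, random

def rbs(alpha, beta, rng):
    """One Birnbaum-Saunders variate via its normal representation:
    T = beta * (a/2 + sqrt((a/2)^2 + 1))^2  with  a = alpha * Z,  Z ~ N(0,1)."""
    a = alpha * rng.gauss(0.0, 1.0)
    return beta * (a / 2 + math.sqrt((a / 2) ** 2 + 1)) ** 2

def reliability_mc(ax, bx, ay, by, n=100_000, seed=7):
    """Monte Carlo estimate of R = P(X < Y) for independent BS variables;
    a numerical cross-check for a closed-form expression."""
    rng = random.Random(seed)
    hits = sum(rbs(ax, bx, rng) < rbs(ay, by, rng) for _ in range(n))
    return hits / n

# Identical stress and strength distributions must give R close to 1/2.
print(reliability_mc(0.5, 1.0, 0.5, 1.0))
# A larger scale parameter for X shifts it upward, so P(X < Y) drops.
print(reliability_mc(0.5, 2.0, 0.5, 1.0))
```

The value of an exact expression is precisely that it replaces such sampling noise (and its n^(-1/2) convergence) with a direct evaluation.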

    Stress–Strength Reliability of the Type P(X < Y) for Birnbaum–Saunders Components: A General Result, Simulations and Real Data Set Applications Felipe S. Quintino Luan Carlos de Sena Monteiro Ozelim Tiago A. da Fonseca Pushpa Narayan Rathie doi: 10.3390/modelling5010012 Modelling 2024-02-15 5 1
    Article
    223 10.3390/modelling5010012 https://www.mdpi.com/2673-3951/5/1/12
    Modelling, Vol. 5, Pages 201-222: From Data to Draught: Modelling and Predicting Mixed-Culture Beer Fermentation Dynamics Using Autoregressive Recurrent Neural Networks https://www.mdpi.com/2673-3951/5/1/11

    Modelling doi: 10.3390/modelling5010011

    Authors: Alexander O’Brien Hongwei Zhang Daniel M. Allwood Andy Rawsthorne

    The ascendency of the craft beer movement within the brewing industry may be attributed to its commitment to unique flavours and innovative styles. Mixed-culture fermentation, celebrated for its novel organoleptic profiles, presents a modelling challenge due to its complex microbial dynamics. This study addresses the inherent complexity of modelling mixed-culture beer fermentation while acknowledging the condition monitoring limitations of craft breweries, namely sporadic offline sampling rates and limited available measurement parameters. A data-driven solution is proposed, utilising an Autoregressive Recurrent Neural Network (AR-RNN) to facilitate the production of novel, replicable, mixed-culture fermented beers. This research identifies time from pitch, specific gravity, pH, and fluid temperature as pivotal model parameters that are cost-effective for craft breweries to monitor offline. Notably, the autoregressive RNN fermentation model is generated using high-frequency multivariate data, a departure from intermittent offline measurements. Employing the trained autoregressive RNN framework, we demonstrate its robust forecasting prowess using limited offline input data, emphasising its ability to capture intricate fermentation dynamics. This data-driven approach offers significant advantages, showcasing the model’s accuracy across various fermentation configurations. Moreover, tailoring the design to the craft beer market’s unique demands significantly enhances the model’s practicable predictive capabilities. It empowers nuanced decision-making in real-world mixed-culture beer production. Furthermore, this model lays the groundwork for future studies, highlighting transformative possibilities for cost-effective model-based control systems in the craft beer sector.
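
The autoregressive mechanism described here, seeding the model with sparse offline measurements and then feeding its own predictions back as inputs, can be sketched with a hand-set first-order model standing in for the trained RNN. The decay constant and specific-gravity values are invented for illustration.

```python
# Autoregressive rollout: after a few offline measurements, the model keeps
# feeding its own predictions back as inputs -- the mechanism an AR-RNN uses.
# A hand-set first-order model stands in for the trained network:
#   gravity(t+1) = g_final + rho * (gravity(t) - g_final)
G_FINAL, RHO = 1.010, 0.8   # illustrative decay toward a final gravity

def step(g):
    return G_FINAL + RHO * (g - G_FINAL)

def rollout(measured, horizon):
    """Seed with offline specific-gravity samples, then predict recursively."""
    preds = list(measured)
    for _ in range(horizon):
        preds.append(step(preds[-1]))   # the model consumes its own output
    return preds

samples = [1.050, 1.042]   # sparse offline measurements (hypothetical)
traj = rollout(samples, horizon=10)
print([round(g, 4) for g in traj])
```

The trained AR-RNN replaces `step` with a learned multivariate mapping over gravity, pH, and temperature, but the rollout loop, and its sensitivity to compounding prediction error, is the same.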

    From Data to Draught: Modelling and Predicting Mixed-Culture Beer Fermentation Dynamics Using Autoregressive Recurrent Neural Networks Alexander O’Brien Hongwei Zhang Daniel M. Allwood Andy Rawsthorne doi: 10.3390/modelling5010011 Modelling 2024-02-07 5 1
    Article
    201 10.3390/modelling5010011 https://www.mdpi.com/2673-3951/5/1/11
    Modelling, Vol. 5, Pages 180-200: Assessing the Impact of Copula Selection on Reliability Measures of Type P(X < Y) with Generalized Extreme Value Marginals https://www.mdpi.com/2673-3951/5/1/10

    Modelling doi: 10.3390/modelling5010010

    Authors: Rebeca Klamerick Lima Felipe Sousa Quintino Tiago A. da Fonseca Luan Carlos de Sena Monteiro Ozelim Pushpa Narayan Rathie Helton Saulo

    In reliability studies, we are interested in the behaviour of a system when it interacts with its surrounding environment. To assess the system’s behaviour in a reliability sense, we can take the system’s intrinsic quality as strength and the outcome of interactions as stress. Failure is observed whenever stress exceeds strength. Taking Y as a random variable representing the stress the system experiences and random variable X as its strength, the probability of not failing can be taken as a proxy for the reliability of the component and given as P(Y<X)=1−P(X<Y). This way, in the present paper, it is considered that X and Y follow generalized extreme value distributions, which represent a family of continuous probability distributions that have been extensively applied in engineering and economic contexts. Our contribution deals with a more general scenario where stress and strength are not independent and copulas are used to model the dependence between the involved random variables. In such modelling framework, we explored the proper selection of copula models characterizing the dependence structure. The Gumbel–Hougaard, Frank, and Clayton copulas were used for modelling bivariate data sets. In each case, information criteria were considered to compare the modelling capabilities of each copula. Two economic applications, as well as an engineering one, on real data sets are discussed. Overall, an easy-to-use methodological framework is described, allowing practitioners to apply it to their own research projects.
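
The sampling machinery behind such a study can be sketched for one of the three copulas: draw (u, v) from a Clayton copula by conditional inversion, push both through GEV quantile functions, and estimate P(X < Y) by Monte Carlo. The dependence parameter and marginal parameters below are illustrative assumptions.

```python
import math, random

def clayton_pair(theta, rng):
    """One (u, v) pair from a Clayton copula via conditional inversion."""
    u, w = rng.random(), rng.random()
    v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

def gev_quantile(p, mu, sigma, xi):
    """Inverse CDF of the generalized extreme value distribution."""
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))
    return mu + sigma * ((-math.log(p)) ** -xi - 1) / xi

def prob_x_less_y(theta, mx, my, n=50_000, seed=11):
    """Monte Carlo P(X < Y) with GEV marginals coupled by a Clayton copula."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        u, v = clayton_pair(theta, rng)
        hits += gev_quantile(u, *mx) < gev_quantile(v, *my)
    return hits / n

same = (0.0, 1.0, 0.1)                             # mu, sigma, xi (assumed)
print(prob_x_less_y(2.0, same, same))              # identical marginals: near 1/2
print(prob_x_less_y(2.0, same, (1.0, 1.0, 0.1)))   # Y shifted upward: larger
```

Because the Clayton copula is exchangeable, identical marginals force P(X < Y) = 1/2 regardless of theta, which makes a convenient sanity check before fitting real data; the Gumbel–Hougaard and Frank cases differ only in the pair-sampling step.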

    Assessing the Impact of Copula Selection on Reliability Measures of Type P(X < Y) with Generalized Extreme Value Marginals Rebeca Klamerick Lima Felipe Sousa Quintino Tiago A. da Fonseca Luan Carlos de Sena Monteiro Ozelim Pushpa Narayan Rathie Helton Saulo doi: 10.3390/modelling5010010 Modelling 2024-01-28 5 1
    Article
    180 10.3390/modelling5010010 https://www.mdpi.com/2673-3951/5/1/10
    Modelling, Vol. 5, Pages 153-179: Stepwise Regression for Increasing the Predictive Accuracy of Artificial Neural Networks: Applications in Benchmark and Advanced Problems https://www.mdpi.com/2673-3951/5/1/9

    Modelling doi: 10.3390/modelling5010009

    Authors: George Papazafeiropoulos

    A new technique is proposed to increase the prediction accuracy of artificial neural networks (ANNs). This technique applies a stepwise regression (SR) procedure to the input data variables, adding nonlinear terms to the input data in a way that maximizes the regression between the output and the input data. In this study, the SR procedure adds quadratic terms and pairwise products of the input variables. Afterwards, the ANN is trained on the enhanced input data obtained by SR. After the proposed SR-ANN algorithm is tested on four benchmark function approximation problems from the literature, six examples of multivariate training data are considered, covering two dataset sizes (big and small) often encountered in engineering applications and three distributions in which the diversity and correlation of the data are varied; the testing performance of the ANN is investigated for varying sizes of its hidden layer. It is shown that the proposed SR-ANN algorithm can reduce the prediction error by a factor of up to 26 and increase the regression coefficient between predicted and actual data in all cases compared to ANNs trained with ordinary algorithms.
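
The input-augmentation step can be sketched directly: build quadratic and pairwise-product candidate terms, then rank them by correlation with the output, a simple stand-in for the paper's stepwise selection criterion. The toy data below are invented so that the output depends on a product of inputs that no original column captures linearly.

```python
from itertools import combinations

def augment(row):
    """Add quadratic terms and pairwise products to an input row,
    mirroring the candidate terms an SR step considers."""
    x = list(row)
    x += [v * v for v in row]                        # squares
    x += [a * b for a, b in combinations(row, 2)]    # pairwise products
    return x

def corr(xs, ys):
    """Pearson correlation, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5 if sxx * syy else 0.0

# Hypothetical data: the output is exactly x0 * x1, which no original
# column tracks linearly -- only the augmented product column does.
data = [(1.0, 2.0), (2.0, 3.0), (3.0, 1.0), (4.0, 4.0), (5.0, 2.0)]
y = [a * b for a, b in data]
aug = [augment(r) for r in data]
scores = [abs(corr([row[j] for row in aug], y)) for j in range(len(aug[0]))]
print(scores.index(max(scores)))   # index of the x0*x1 column wins
```

Columns are ordered [x0, x1, x0², x1², x0·x1]; the product column correlates perfectly with the output while every original column scores well below it, which is the effect the SR step exploits before the ANN is trained.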

    Stepwise Regression for Increasing the Predictive Accuracy of Artificial Neural Networks: Applications in Benchmark and Advanced Problems George Papazafeiropoulos doi: 10.3390/modelling5010009 Modelling 2024-01-12 5 1
    Article
    153 10.3390/modelling5010009 https://www.mdpi.com/2673-3951/5/1/9
    Modelling, Vol. 5, Pages 117-152: Controller Design for Air Conditioner of a Vehicle with Three Control Inputs Using Model Predictive Control https://www.mdpi.com/2673-3951/5/1/8 2024-01-03

    Modelling doi: 10.3390/modelling5010008

    Authors: Trevor Parent Jeffrey J. Defoe Afshin Rahimi

    Fuel consumption optimization is a critical field of research within the automotive industry to meet consumer expectations and regulatory requirements. A reduction in fuel consumption can be achieved by reducing the energy consumed by the vehicle. Several subsystems contribute to the overall energy consumption of the vehicle, including the air conditioning (A/C) system. The loads within the A/C system are mainly contributed by the compressor, condenser fan, and underhood aerodynamic drag, which are the components targeted for overall vehicle energy use reduction in this paper. This paper explores a new avenue for A/C system control by considering the power consumption due to vehicle drag (regulated by the condenser fan and active grille shutters (AGS)) to reduce the energy consumption of the A/C system and improve the overall vehicle fuel economy. The control approach used in this paper is model predictive control (MPC). The controller is designed in Simulink, where the compressor clutch signal, condenser fan speed, and AGS open-fraction are inputs. The controller is connected to a high-fidelity vehicle model in Gamma Technologies (GT)-Suite (which is treated as the real physical vehicle) to form a software-in-the-loop simulation environment, where the controller sends actuator inputs to GT-Suite and the vehicle response is sent back to the controller in Simulink. Quadratic programming is used to solve the MPC optimization problem and determine the optimal input trajectory at each time step. The results indicate that using MPC to control the compressor clutch, condenser fan, and AGS can provide a 37.6% reduction in the overall A/C system energy consumption and a 32.7% reduction in the error for the air temperature reference tracking compared to the conventional baseline proportional integral derivative control present in the GT-Suite model.

    Controller Design for Air Conditioner of a Vehicle with Three Control Inputs Using Model Predictive Control Trevor Parent Jeffrey J. Defoe Afshin Rahimi doi: 10.3390/modelling5010008 Modelling 2024-01-03 Modelling 2024-01-03 5 1
    Article
    117 10.3390/modelling5010008 https://www.mdpi.com/2673-3951/5/1/8
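As a toy illustration of the receding-horizon idea behind MPC, the sketch below regulates an invented one-state cabin-temperature model via an unconstrained quadratic program solved by least squares. The model, weights, horizon, and reference are assumptions for illustration only, not the paper's GT-Suite plant or its constrained three-input controller:

```python
import numpy as np

# Hypothetical 1-state thermal model: temperature x driven by input u.
# x[k+1] = a*x[k] + b*u[k]; regulate toward reference r_ref over horizon N.
a, b = 0.9, 0.5
N = 10
q, r_w = 1.0, 0.1   # state-tracking and input weights
r_ref = 22.0        # target temperature

def mpc_step(x0):
    # Stack the predictions: x = F*x0 + G@u, where u is the input sequence.
    F = np.array([a**(k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a**(i - j) * b
    # Unconstrained QP: min q*||F x0 + G u - r_ref||^2 + r_w*||u||^2,
    # written as a stacked least-squares problem.
    A = np.vstack([np.sqrt(q) * G, np.sqrt(r_w) * np.eye(N)])
    y = np.concatenate([np.sqrt(q) * (r_ref - F * x0), np.zeros(N)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return u[0]  # receding horizon: apply only the first input

x = 15.0
for _ in range(30):
    x = a * x + b * mpc_step(x)
print(round(x, 1))
```

Each step re-solves the QP and applies only the first input of the optimal trajectory, which is the receding-horizon principle the abstract describes; a production controller would add input constraints and a multi-input plant model.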
    Modelling, Vol. 5, Pages 99-116: Modeling the Market-Driven Composition of the Passenger Vehicle Market during the Transition to Electric Vehicles https://www.mdpi.com/2673-3951/5/1/7

    Modelling doi: 10.3390/modelling5010007

    Authors: Vikram Mittal Rajesh Shah

    The automotive market is currently shifting away from traditional vehicles reliant on internal combustion engines, favoring battery electric vehicles (BEVs), hybrid electric vehicles (HEVs), and plug-in hybrid electric vehicles (PHEVs). The widespread acceptance of these vehicles, especially without government subsidies, hinges on market dynamics, particularly customers opting for vehicles with the lowest overall cost of ownership. This paper aims to model the total cost of ownership for various powertrains, encompassing conventional vehicles, HEVs, PHEVs, and BEVs, focusing on both sedans and sports utility vehicles. The modeling uses vehicle dynamics to approximate the fuel and electricity consumption rates for each powertrain. Following this, the analysis estimates the purchase cost and the lifetime operational cost for each vehicle type, factoring in average daily mileage. As drivers consider vehicle replacements, their choice tends to lean towards the most economical option, especially when performance metrics (e.g., range, acceleration, and payload) are comparable across the choices. The analysis seeks to determine the percentage of drivers likely to choose each vehicle type based on their specific driving habits. Advances in battery technology will reduce battery weight and cost; further, the cost of electricity will decrease as more renewable energy sources are integrated into the grid. In turn, the total cost of ownership will decrease for electrified vehicles. By following battery trends, this study is able to model the makeup of the automotive market over time as it transitions from fossil-fuel-based vehicles to fully electric vehicles. The model finds that, until the cost of batteries and electricity is significantly reduced, the composition of the vehicle market remains a mixture of all vehicle types.

    Modeling the Market-Driven Composition of the Passenger Vehicle Market during the Transition to Electric Vehicles Vikram Mittal Rajesh Shah doi: 10.3390/modelling5010007 Modelling 2023-12-27 Modelling 2023-12-27 5 1
    Article
    99 10.3390/modelling5010007 https://www.mdpi.com/2673-3951/5/1/7
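The cost-of-ownership comparison that drives the market model reduces to simple arithmetic per powertrain; the sketch below uses invented placeholder figures (not the paper's calibrated vehicle-dynamics estimates) to show how the cheapest choice shifts with annual mileage:

```python
# Hypothetical figures for illustration only; not the paper's calibrated data.
VEHICLES = {
    #        purchase ($), energy ($/km), maintenance ($/km)
    "ICE":  (28000, 0.090, 0.040),
    "HEV":  (31000, 0.065, 0.038),
    "PHEV": (35000, 0.050, 0.036),
    "BEV":  (40000, 0.035, 0.025),
}

def total_cost_of_ownership(vehicle, km_per_year, years=10):
    # TCO = purchase cost + lifetime operating cost over the holding period.
    purchase, energy, maint = VEHICLES[vehicle]
    return purchase + km_per_year * years * (energy + maint)

def cheapest(km_per_year):
    # A rational buyer picks the powertrain with the lowest TCO.
    return min(VEHICLES, key=lambda v: total_cost_of_ownership(v, km_per_year))

for km in (5000, 15000, 40000):
    print(km, cheapest(km))
# Low-mileage drivers favor the cheap-to-buy ICE; high-mileage drivers
# favor the cheap-to-run BEV, so the market stays mixed.
```

With these assumed numbers the winner moves from ICE at 5,000 km/year through HEV at 15,000 km/year to BEV at 40,000 km/year, mirroring the mixed-market finding in the abstract.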
    Modelling, Vol. 5, Pages 85-98: DIAG Approach: Introducing the Cognitive Process Mining by an Ontology-Driven Approach to Diagnose and Explain Concept Drifts https://www.mdpi.com/2673-3951/5/1/6

    Modelling doi: 10.3390/modelling5010006

    Authors: Sina Namaki Araghi Franck Fontanili Arkopaul Sarkar Elyes Lamine Mohamed-Hedi Karray Frederick Benaben

    The remarkable growth of process mining applications in care pathway monitoring is undeniable. One emerging line of case studies is the use of patients’ location data in process mining analyses. While the bulk of published work focuses on introducing process discovery algorithms, challenges beyond discovery also need to be addressed. Literature analysis indicates that explainability, reasoning, and characterizing the root causes of process drifts in healthcare processes constitute an important but overlooked challenge. In addition, incorporating domain-specific knowledge into process discovery could be a significant contribution to the process mining literature. Therefore, we address this issue by introducing cognitive process mining through the DIAG approach, which consists of a meta-model and an algorithm. This approach enables reasoning and diagnosis in process mining through an ontology-driven framework. With DIAG, we modeled healthcare semantics in a process mining application and diagnosed the causes of drifts in patients’ pathways. We performed an experiment in a hospital living lab to examine the effectiveness of our approach.

    DIAG Approach: Introducing the Cognitive Process Mining by an Ontology-Driven Approach to Diagnose and Explain Concept Drifts Sina Namaki Araghi Franck Fontanili Arkopaul Sarkar Elyes Lamine Mohamed-Hedi Karray Frederick Benaben doi: 10.3390/modelling5010006 Modelling 2023-12-27 Modelling 2023-12-27 5 1
    Article
    85 10.3390/modelling5010006 https://www.mdpi.com/2673-3951/5/1/6
    Modelling, Vol. 5, Pages 71-84: Shell-Based Finite Element Modeling of Herøysund Bridge in Norway https://www.mdpi.com/2673-3951/5/1/5

    Modelling doi: 10.3390/modelling5010005

    Authors: Harpal Singh Zeeshan Azad Vanni Nicoletti

    This paper thoroughly examines the application of the Finite Element Method (FEM) to the numerical modal analysis of Herøysund Bridge, focusing on the theoretical backdrop, the construction process, and FEM techniques. This work examines the specific applied FEM approaches and their advantages and disadvantages. This Herøysund Bridge analysis employs a two-pronged strategy consisting of a 3D–solid model and a shell model. To forecast the physical behavior of a structure, assumptions, modeling methodologies, and the incorporation of specific components such as pillars are applied to both approaches. This research also emphasizes the importance of boundary conditions, examining the structural effects of standard Earth gravity, a post-tensioned load, and a railing and asphalt load. The Results section thoroughly explores the mode shapes and frequencies of the 3D–solid and shell models. The conclusion of this work includes findings obtained from the study, implications for Herøysund Bridge, and a comparison of both modeling strategies. It also incorporates ideas for future research and guides employing FEM 3D–solid and shell methods to design and construct more efficient, resilient, and durable bridge structures.

    Shell-Based Finite Element Modeling of Herøysund Bridge in Norway Harpal Singh Zeeshan Azad Vanni Nicoletti doi: 10.3390/modelling5010005 Modelling 2023-12-23 Modelling 2023-12-23 5 1
    Article
    71 10.3390/modelling5010005 https://www.mdpi.com/2673-3951/5/1/5
    Modelling, Vol. 5, Pages 55-70: Life-Cycle Assessment of an Office Building: Influence of the Structural Design on the Embodied Carbon Emissions https://www.mdpi.com/2673-3951/5/1/4

    Modelling doi: 10.3390/modelling5010004

    Authors: José Humberto Matias de Paula Filho Marina D’Antimo Marion Charlier Olivier Vassart

    In 2020, 37% of global CO2eq. emissions were attributed to the construction sector. The major effort to reduce this share of emissions has been focused on reducing the operational carbon of buildings. Recently, awareness has also been raised on the role of embodied carbon: emissions from materials and construction processes must be urgently addressed to ensure sustainable buildings. To assess the embodied carbon of a building, a life-cycle assessment (LCA) can be performed; this is a science-based and standardized methodology for quantifying the environmental impacts of a building during its life. This paper presents the comparative results of a “cradle-to-cradle” building LCA of an office building located in Luxembourg with 50 years of service life. Three equivalent structural systems are compared: a steel–concrete composite frame, a prefabricated reinforced concrete frame, and a timber frame. A life-cycle inventory (LCI) was performed using environmental product declarations (EPDs) according to EN 15804. For the considered office building, the steel–concrete composite solution outperforms the prefabricated concrete frame in terms of global warming potential (GWP). Additionally, it provides a lower GWP than the timber-frame solution when a landfill end-of-life (EOL) scenario for wood is considered. Finally, the steel–concrete composite and timber solutions show equivalent GWPs when the wood EOL is assumed to be 100% incinerated with energy recovery.

    Life-Cycle Assessment of an Office Building: Influence of the Structural Design on the Embodied Carbon Emissions José Humberto Matias de Paula Filho Marina D’Antimo Marion Charlier Olivier Vassart doi: 10.3390/modelling5010004 Modelling 2023-12-22 Modelling 2023-12-22 5 1
    Article
    55 10.3390/modelling5010004 https://www.mdpi.com/2673-3951/5/1/4
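At its core, an EN 15804-style LCA comparison sums module-level GWP contributions per design option. The sketch below uses invented numbers, not the paper's EPD data, to show how the assumed end-of-life scenario for wood can swing the steel-concrete versus timber comparison:

```python
# Illustrative GWP bookkeeping (kg CO2-eq per functional unit); values are
# invented, not the paper's EPD inventory. Module labels follow EN 15804:
# A1-A3 product stage, A4-A5 construction, B use stage, C end-of-life,
# D loads/benefits beyond the system boundary (e.g. energy recovery).
frames = {
    "steel-concrete":      {"A1-A3": 900, "A4-A5": 60, "B": 40, "C": 50,  "D": -180},
    "timber-landfill":     {"A1-A3": 650, "A4-A5": 55, "B": 40, "C": 320, "D": -60},
    "timber-incineration": {"A1-A3": 650, "A4-A5": 55, "B": 40, "C": 200, "D": -80},
}

def gwp(modules, include_d=True):
    # Total GWP; module D can be reported separately under EN 15804.
    return sum(v for k, v in modules.items() if include_d or k != "D")

for name, modules in frames.items():
    print(name, gwp(modules))
```

With these placeholder figures the steel-concrete frame beats timber under a landfill EOL and is roughly equivalent under incineration with energy recovery, the qualitative pattern the abstract reports.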
    Modelling, Vol. 5, Pages 37-54: Finite Element In-Depth Verification: Base Displacements of a Spherical Dome Loaded by Edge Forces and Moments https://www.mdpi.com/2673-3951/5/1/3

    Modelling doi: 10.3390/modelling5010003

    Authors: Vasiliki G. Terzi Triantafyllos K. Makarios

    Nowadays, engineers possess a wealth of numerical packages for designing civil engineering structures. The finite element method offers a variety of sophisticated element types, nonlinear materials, and solution algorithms, which enable engineers to confront complicated design problems. However, one of the difficult tasks is the verification of the produced numerical results. The present paper deals with the in-depth verification of a basic problem, referring to the axisymmetric loading by edge forces/moments of a spherical dome, truncated at various roll-down angles, φo. Two formulations of analytical solutions are derived from the literature; their results are compared with those produced by the finite element method. Modelling details, such as the finite element type, orientation of joints, application of loading, boundary conditions, and results’ interpretation, are presented thoroughly. Four different ratios of the radius of curvature, r, to the shell’s thickness, t, are examined in order to investigate the compatibility of the finite element implementation with “first-order” shell theory. The discussion refers to the differences not only between the numerical and analytical results, but also between the two analytical approaches. Furthermore, it emphasizes the necessity of conducting even simple linear elastic preliminary verification tests as a basis for the construction of more elaborate and sophisticated models.

    Finite Element In-Depth Verification: Base Displacements of a Spherical Dome Loaded by Edge Forces and Moments Vasiliki G. Terzi Triantafyllos K. Makarios doi: 10.3390/modelling5010003 Modelling 2023-12-21 Modelling 2023-12-21 5 1
    Article
    37 10.3390/modelling5010003 https://www.mdpi.com/2673-3951/5/1/3
    Modelling, Vol. 5, Pages 16-36: Optimal Multi-Sensor Obstacle Detection System for Small Fixed-Wing UAVs https://www.mdpi.com/2673-3951/5/1/2

    Modelling doi: 10.3390/modelling5010002

    Authors: Marta Portugal André C. Marta

    The safety enhancement of small fixed-wing UAVs regarding obstacle detection is addressed using optimization techniques to find the best sensor orientations for different multi-sensor configurations. Four types of sensors for obstacle detection are modeled, namely an ultrasonic sensor, laser rangefinder, LIDAR, and RADAR, using specifications from commercially available models. The simulation environment developed includes collision avoidance with the Potential Fields method. An optimization study is conducted using a genetic algorithm that identifies the best sensor sets and respective orientations relative to the UAV longitudinal axis for the highest obstacle avoidance success rate. The UAV performance proves critical to the solutions obtained; its speed is considered in the range of 5–15 m/s with a turning rate limited to 45°/s. Forty collision scenarios with both stationary and moving obstacles are randomly generated. Among the combinations of the sensors studied, 12 sensor sets are presented. The ultrasonic sensors prove to be inadequate due to their very limited range, while the laser rangefinders benefit from extended range but have a narrow field of view. In contrast, LIDAR and RADAR emerge as promising options with significant ranges and wide fields of view. The best configurations involve a front-facing LIDAR complemented with two laser rangefinders oriented at ±10° or two RADARs oriented at ±28°.

    Optimal Multi-Sensor Obstacle Detection System for Small Fixed-Wing UAVs Marta Portugal André C. Marta doi: 10.3390/modelling5010002 Modelling 2023-12-20 Modelling 2023-12-20 5 1
    Article
    16 10.3390/modelling5010002 https://www.mdpi.com/2673-3951/5/1/2
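The genetic-algorithm search over sensor orientations can be sketched in miniature: a hypothetical LIDAR-plus-two-rangefinders set is evolved so its fields of view cover randomly sampled obstacle bearings. The sensor half-FOVs, the bearing distribution, and the GA settings are all illustrative assumptions, far simpler than the paper's closed-loop avoidance simulations with moving obstacles:

```python
import random

random.seed(0)

# Hypothetical sensor set: one wide LIDAR plus two narrow rangefinders
# (half field-of-view per sensor, in degrees).
HALF_FOV = [15.0, 1.0, 1.0]

# Obstacle encounter bearings (deg, relative to the UAV nose) used for
# evaluation; in the paper these arise from simulated collision scenarios.
BEARINGS = [random.uniform(-40, 40) for _ in range(200)]

def detected(bearing, orientations):
    return any(abs(bearing - o) <= h for o, h in zip(orientations, HALF_FOV))

def fitness(orientations):
    # Fraction of sampled bearings covered by at least one sensor.
    return sum(detected(b, orientations) for b in BEARINGS) / len(BEARINGS)

def mutate(orientations, sigma=3.0):
    # Gaussian perturbation, clipped to the mounting range of +/-45 deg.
    return [max(-45, min(45, o + random.gauss(0, sigma))) for o in orientations]

# (mu + lambda)-style evolution: keep the 10 best, refill by mutation.
pop = [[random.uniform(-45, 45) for _ in HALF_FOV] for _ in range(20)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = max(pop, key=fitness)
print(round(fitness(best), 2), [round(o) for o in best])
```

The evolved layout tends toward a centered LIDAR with the rangefinders pushed outside its cone, qualitatively like the LIDAR-plus-offset-sensor configurations the abstract highlights.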
    Modelling, Vol. 5, Pages 1-15: Machine Learning-Assisted Characterization of Pore-Induced Variability in Mechanical Response of Additively Manufactured Components https://www.mdpi.com/2673-3951/5/1/1

    Modelling doi: 10.3390/modelling5010001

    Authors: Mohammad Rezasefat James D. Hogan

    Manufacturing defects, such as porosity and inclusions, can significantly compromise the structural integrity and performance of additively manufactured parts by acting as stress concentrators and potential initiation sites for failure. This paper investigates the effects of pore system morphology (number of pores, total volume, volume fraction, and standard deviation of size of pores) on the material response of additively manufactured Ti6Al4V specimens under a shear–compression stress state. An automatic approach for finite element simulations, using the J2 plasticity model, was utilized on a shear–compression specimen with artificial pores of varying characteristics to generate the dataset. An artificial neural network (ANN) surrogate model was developed to predict peak force and failure displacement of specimens with different pore attributes. The ANN demonstrated effective prediction capabilities, offering insights into the importance of individual input variables on mechanical performance of additively manufactured parts. Additionally, a sensitivity analysis using the Garson equation was performed to identify the most influential parameters affecting the material’s behaviour. It was observed that materials with more uniform pore sizes exhibit better mechanical properties than those with a wider size distribution. Overall, the study contributes to a better understanding of the interplay between pore characteristics and material response, providing better defect-aware design and property–porosity linkage in additive manufacturing processes.

    Machine Learning-Assisted Characterization of Pore-Induced Variability in Mechanical Response of Additively Manufactured Components Mohammad Rezasefat James D. Hogan doi: 10.3390/modelling5010001 Modelling 2023-12-19 Modelling 2023-12-19 5 1
    Article
    1 10.3390/modelling5010001 https://www.mdpi.com/2673-3951/5/1/1
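The Garson sensitivity analysis mentioned in the abstract has a compact weight-based form for a single-hidden-layer network; the sketch below uses random placeholder weights rather than the trained surrogate's parameters, so only the mechanics of the method are shown:

```python
import numpy as np

# Garson's method: relative importance of each input, computed from the
# weight magnitudes of a single-hidden-layer network. The weights here are
# random stand-ins for a trained ANN surrogate's parameters.
rng = np.random.default_rng(1)
n_in, n_hidden = 4, 8          # e.g. pore count, volume, fraction, size spread
W_ih = rng.normal(size=(n_in, n_hidden))   # input -> hidden weights
w_ho = rng.normal(size=n_hidden)           # hidden -> output weights

def garson_importance(W_ih, w_ho):
    # Contribution of input i through hidden node j: |W_ij| * |w_j|,
    # normalized per hidden node, then summed over hidden nodes and
    # normalized over inputs so the importances sum to 1.
    contrib = np.abs(W_ih) * np.abs(w_ho)      # shape (n_in, n_hidden)
    contrib = contrib / contrib.sum(axis=0)    # share within each hidden node
    importance = contrib.sum(axis=1)
    return importance / importance.sum()

imp = garson_importance(W_ih, w_ho)
print(np.round(imp, 3))   # relative importances of the four inputs
```

On a trained surrogate, the largest entries identify the pore attributes that most influence the predicted peak force or failure displacement.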
    Modelling, Vol. 4, Pages 650-665: Modelling the Acoustic Propagation in a Test Section of a Cavitation Tunnel: Scattering Issues of the Acoustic Source https://www.mdpi.com/2673-3951/4/4/37

    Modelling doi: 10.3390/modelling4040037

    Authors: Romuald Boucheron

    The prediction of the underwater-radiated noise for a vessel is classically performed at a model scale and extrapolated by semi-empirical laws. The accuracy of such a method depends on many parameters. Among them, the acoustic propagation model used to estimate the noise measured at a model scale is important. The present study focuses on the impact of the presence of a source in the transverse plane. The scattering effect, often neglected in many studies, is here investigated. Applying different methods for computation, we perform several simulations of the acoustic pressure field to show the influence of the scattered field. We finally discuss the results and draw some conclusions about the scattering effect in our experimental configuration.

    Modelling the Acoustic Propagation in a Test Section of a Cavitation Tunnel: Scattering Issues of the Acoustic Source Romuald Boucheron doi: 10.3390/modelling4040037 Modelling 2023-12-08 Modelling 2023-12-08 4 4
    Article
    650 10.3390/modelling4040037 https://www.mdpi.com/2673-3951/4/4/37
    Modelling, Vol. 4, Pages 628-649: Finite Element Modeling and Analysis of Perforated Steel Members under Blast Loading https://www.mdpi.com/2673-3951/4/4/36

    Modelling doi: 10.3390/modelling4040036

    Authors: Mahmoud T. Nawar Ayman El-Zohairy Ibrahim T. Arafa

    Perforated steel members (PSMs) are now frequently used in building construction due to their beneficial features, including their proven blast-resistance abilities. To safeguard against structural failures from explosions and terrorist threats, perforated steel beams (PSBs) and perforated steel columns (PSCs) offer a viable alternative to traditional steel members. This is attributed to their impressive energy absorption potential, a result of their combined high strength and ductile behavior. In this study, numerical examinations of damage assessment under the combined effects of gravity and blast loads are carried out to mimic real-world scenarios of external explosions close to steel structures. The damage assessment for PSBs and PSCs considers not just the initial deformation from the blast, but also takes into account the residual capacities to formulate dependable damage metrics post-explosion. Comprehensive explicit finite element (FE) analyses are performed with the LSDYNA software. The FE model, when compared against test results, aligns well across all resistance phases, from bending and softening to tension membrane regions. This validated numerical model offers a viable alternative to laboratory experiments for predicting the dynamic resistance of PSBs and PSCs. Moreover, it is advisable to use fully integrated solid elements, featuring eight integration points on the element surface, in the FE models for accurate predictions of PSBs’ and PSCs’ behavior under blast loading. A parametric study is presented to investigate the effect of web-opening shapes, retrofitting, and different blast scenarios. The results obtained from the analytical FE approaches are used to obtain the ductile responses of PSMs, and are considered an important key in comparisons among the studied cases.

    Modelling, Vol. 4, Pages 611-627: Generalized Fiducial Inference for the Generalized Rayleigh Distribution (2023-11-17) https://www.mdpi.com/2673-3951/4/4/35

    Modelling doi: 10.3390/modelling4040035

    Authors: Xuan Zhu Weizhong Tian Chengliang Tian

    This article focuses on the interval estimation of the generalized Rayleigh distribution with scale and shape parameters. The generalized fiducial method is used to construct the fiducial point estimators as well as the fiducial confidence intervals, and then their performance is compared with other methods such as the maximum likelihood estimation, Bayesian estimation and parametric bootstrap method. Monte Carlo simulation studies are carried out to examine the efficiency of the methods in terms of the mean square error, coverage probability and average length. Finally, two real data sets are presented to demonstrate the applicability of the proposed method.

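    The paper's code is not part of this feed, so as a minimal stdlib-only sketch of the distribution it studies: assuming the common parameterization F(x) = (1 − e^{−(λx)²})^α with shape α and scale λ, variates can be drawn by inverting the CDF. Function names and parameter values below are illustrative, not the authors'.

    ```python
    import math
    import random

    def gr_cdf(x, alpha, lam):
        """CDF of the generalized Rayleigh distribution (assumed form)."""
        return (1.0 - math.exp(-(lam * x) ** 2)) ** alpha

    def gr_sample(alpha, lam, rng):
        """One variate by inverting the CDF: x = sqrt(-ln(1 - u^(1/alpha))) / lam."""
        u = rng.random()
        return math.sqrt(-math.log(1.0 - u ** (1.0 / alpha))) / lam

    rng = random.Random(42)
    alpha, lam = 2.0, 1.5                 # illustrative shape and scale
    samples = [gr_sample(alpha, lam, rng) for _ in range(20000)]

    # Sanity check: the empirical median should sit near F^{-1}(0.5).
    theo_median = math.sqrt(-math.log(1.0 - 0.5 ** (1.0 / alpha))) / lam
    emp_median = sorted(samples)[len(samples) // 2]
    print(round(theo_median, 3), round(emp_median, 3))
    ```

    Such a sampler is also the starting point for the parametric bootstrap the abstract compares against: refit the parameters on resampled data and collect the interval endpoints.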
    Modelling, Vol. 4, Pages 600-610: Modelling Detection Distances to Small Bodies Using Spacecraft Cameras (2023-11-17) https://www.mdpi.com/2673-3951/4/4/34

    Modelling doi: 10.3390/modelling4040034

    Authors: Vittorio Franzese Andreas Makoto Hein

    Small bodies in the Solar System are appealing targets for scientific and technological space missions, owing to their diversity in intrinsic and extrinsic properties, besides orbit and other factors. Missions to small bodies pass through the critical onboard object detection phase, where the body’s light becomes visible to the spacecraft camera. The relative line-of-sight to the object is acquired and processed to feed relative guidance and navigation algorithms, therefore steering the spacecraft trajectory towards the target. This work assesses the distance of detection for each small body in the Solar System considering the target radiometric properties, three typical spacecraft camera setups, and the relative observation geometry by virtue of a radiometric model. Several uncertainties and noises are considered in the modelling of the detection process. The detection distances for each known small body are determined for small-, medium-, and large-class spacecraft. This proves useful for early mission design phases, where a waypoint for detection needs to be determined, allowing the shift from an absolute to a relative guidance and navigation phase. The work produces an extensive dataset that is freely accessible and useful for teams working on the design phases of space missions.

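    The paper's radiometric model accounts for camera setup, noise, and observation geometry; as a rough sketch of the core idea only, the phase-free magnitude relation V = H + 5 log₁₀(d_sun · d_obs) (distances in AU, H the absolute magnitude) can be inverted for the distance at which a body reaches a camera's limiting magnitude. The H and V values below are hypothetical.

    ```python
    import math

    def apparent_magnitude(H, d_sun_au, d_obs_au):
        """Phase-free apparent magnitude: V = H + 5*log10(d_sun * d_obs), AU."""
        return H + 5.0 * math.log10(d_sun_au * d_obs_au)

    def detection_distance_au(H, v_limit):
        """Distance at which V hits the camera limit, taking the Sun-body and
        observer-body distances as comparable (d_sun ~ d_obs ~ d)."""
        return 10.0 ** ((v_limit - H) / 10.0)

    # Hypothetical H = 18 body seen by a camera with limiting magnitude 12.
    d = detection_distance_au(18.0, 12.0)
    print(round(d, 4))
    ```

    A full model would subtract a phase-function term and fold in exposure time and sensor noise, which is what shifts these idealized distances in practice.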
    Modelling, Vol. 4, Pages 585-599: An Extension of the Susceptible–Infected Model and Its Application to the Analysis of Information Dissemination in Social Networks (2023-11-15) https://www.mdpi.com/2673-3951/4/4/33

    Modelling doi: 10.3390/modelling4040033

    Authors: Sergei Sidorov Alexey Faizliev Sophia Tikhonova

    Social media significantly influences business, politics, and society. Easy access and interaction among users allow information to spread rapidly across social networks. Understanding how information is disseminated through these new publishing methods is crucial for political and marketing purposes. However, modeling and predicting information diffusion is challenging due to the complex interactions between network users. This study proposes an analytical approach based on diffusion models to predict the number of social media users engaging in discussions on a topic. We develop a modified version of the susceptible–infected (SI) model that considers the heterogeneity of interactions between users in complex networks. Our model considers the network structure, abandons the assumption of homogeneous mixing, and focuses on information diffusion in scale-free networks. We provide explicit algorithms for modeling information propagation on different types of random graphs and real network structures. We compare our model with alternative approaches, both those considering network structure and those that do not. The accuracy of our model in predicting the number of informed nodes in simulated information diffusion networks demonstrates its effectiveness in describing and predicting information dissemination in social networks. This study highlights the potential of graph-based epidemic models in analyzing online discussion topics and understanding other phenomena spreading on social networks.

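    A stdlib-only sketch of the kind of experiment described above (not the authors' modified SI model): grow a scale-free graph by preferential attachment and run a discrete-time SI spread on it. Graph size, attachment parameter, seed, and infection probability are arbitrary.

    ```python
    import random

    def barabasi_albert(n, m, rng):
        """Scale-free graph via preferential attachment: each new node links to
        m distinct existing nodes picked with probability proportional to degree."""
        adj = {i: set() for i in range(n)}
        pool = list(range(m))              # degree-weighted sampling pool
        for new in range(m, n):
            targets = set()
            while len(targets) < m:
                targets.add(rng.choice(pool))
            for t in targets:
                adj[new].add(t)
                adj[t].add(new)
            pool.extend(targets)
            pool.extend([new] * m)         # each edge adds both endpoints' weight
        return adj

    def si_spread(adj, seed, beta, steps, rng):
        """Discrete-time SI: each step, every infected node infects each
        susceptible neighbour independently with probability beta."""
        infected = {seed}
        history = [len(infected)]
        for _ in range(steps):
            fresh = set()
            for u in infected:
                for v in adj[u]:
                    if v not in infected and rng.random() < beta:
                        fresh.add(v)
            infected |= fresh
            history.append(len(infected))
        return history

    rng = random.Random(7)
    g = barabasi_albert(100, 2, rng)
    hist = si_spread(g, seed=0, beta=1.0, steps=20, rng=rng)
    print(hist[0], hist[-1])
    ```

    Because infection never recovers in SI, the infected count is non-decreasing; on a connected graph with beta = 1 it saturates at the node count within the graph's diameter.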
    Modelling, Vol. 4, Pages 567-584: Using Discrete-Event Simulation to Balance Staff Allocation and Patient Flow between Clinic and Surgery (2023-11-15) https://www.mdpi.com/2673-3951/4/4/32

    Modelling doi: 10.3390/modelling4040032

    Authors: John J. Forbus Daniel Berleant

    We consider the problem of system-level balanced scheduling in a pediatric hospital setting. A hospital clinic has a queue for patients needing care. After being seen in clinic, many require follow-up surgery, for which they also wait in a queue. The rate-limiting factor is physician availability for both clinic visits and surgical cases. Although much existing work has been done to optimize clinic appointments, as well as to optimize surgical appointments, this novel approach models the entire patient journey at the system level, through both clinic and surgery, to optimize the total patient experience. A discrete-event simulation model of the system was built based on historic patient encounter data and validated. The system model was then optimized to determine the best allocation of physician resources across the system to minimize total patient wait time using machine learning. The results were then compared to baseline.

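    The authors' model is calibrated on historic encounter data; a toy stand-in with made-up arrival/service rates and a 40% surgery-referral fraction can still illustrate the two-stage clinic-to-surgery queue and why physician allocation matters.

    ```python
    import heapq
    import random

    def simulate(n_patients, clinic_docs, surgery_docs, rng):
        """Two-stage queue: every patient is seen in clinic; a fraction then
        waits again for surgery. Physician availability limits both stages."""
        arrivals, t = [], 0.0
        for _ in range(n_patients):
            t += rng.expovariate(1.0)                  # mean interarrival time 1.0
            arrivals.append(t)
        clinic_free = [0.0] * clinic_docs              # next-free time per physician
        surgery_free = [0.0] * surgery_docs
        heapq.heapify(clinic_free)
        heapq.heapify(surgery_free)
        waits = []
        for a in arrivals:
            start = max(a, clinic_free[0])             # earliest free clinic physician
            waits.append(start - a)
            done = start + rng.expovariate(1.0 / 0.5)  # mean clinic visit 0.5
            heapq.heapreplace(clinic_free, done)
            if rng.random() < 0.4:                     # 40% need follow-up surgery
                s_start = max(done, surgery_free[0])
                waits.append(s_start - done)
                heapq.heapreplace(surgery_free, s_start + rng.expovariate(1.0 / 1.5))
        return sum(waits) / len(waits)

    # Same random seed, different staffing: an extra clinic physician cuts waits.
    one_doc = simulate(500, clinic_docs=1, surgery_docs=1, rng=random.Random(1))
    two_docs = simulate(500, clinic_docs=2, surgery_docs=1, rng=random.Random(1))
    print(round(one_doc, 2), round(two_docs, 2))
    ```

    The paper optimizes this allocation over the whole system with machine learning; this sketch only shows the simulation primitive such an optimizer would call.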
    Modelling, Vol. 4, Pages 548-566: Finite Element Modeling Aspects of Buried Large Diameter Steel Pipe–Butterfly Valve Interaction (2023-11-10) https://www.mdpi.com/2673-3951/4/4/31

    Modelling doi: 10.3390/modelling4040031

    Authors: Ashraf Mohammed Daradkeh Himan Hojat Jalali

    Buried flexible pipes are allowed to deflect up to 2–5% of the pipe diameter, which can become problematic for the connected direct-bury, large-diameter butterfly valves. The complex behavior of the pipe–valve–soil system makes it difficult to predict the deflection of the pipe/valve system. In the absence of field/experimental studies, the application of finite element analysis (FEA) seems necessary to predict deflection and stresses and to avoid potential downtime associated with disruption of service. This paper described the FEA of a large-diameter pipe–valve system, with different backfills under gravity, overburden, and internal pressure loads. The effects of modeling different components of the system (e.g., flanges, bearing housing, gate disc, etc.) were described and investigated. The goal of this study was to provide insight into the design and installation of direct-bury pipe–valve systems and evaluate current installation methods in the absence of guidelines. In addition, the level of modeling details required for FEA to yield accurate results was discussed.

    Modelling, Vol. 4, Pages 529-547: The Data Assimilation Approach in a Multilayered Uncertainty Space (2023-11-08) https://www.mdpi.com/2673-3951/4/4/30

    Modelling doi: 10.3390/modelling4040030

    Authors: Martin Drieschner Clemens Herrmann Yuri Petryna

    The simultaneous consideration of a numerical model and of different observations can be achieved using data-assimilation methods. In this contribution, the ensemble Kalman filter (EnKF) is applied to obtain the system-state development and also an estimation of unknown model parameters. An extension of the Kalman filter used is presented for the case of uncertain model parameters, which should not or cannot be estimated due to a lack of necessary measurements. It is shown that incorrectly assumed probability density functions for present uncertainties adversely affect the model parameter to be estimated. Therefore, the problem is embedded in a multilayered uncertainty space consisting of the stochastic space, the interval space, and the fuzzy space. Then, we propose classifying all present uncertainties into aleatory and epistemic ones. Aleatorically uncertain parameters can be used directly within the EnKF without an increase in computational costs and without the necessity of additional methods for the output evaluation. Epistemically uncertain parameters cannot be integrated into the classical EnKF procedure, so a multilayered uncertainty space is defined, leading to inevitable higher computational costs. Various possibilities for uncertainty quantification based on probability and possibility theory are shown, and the influence on the results is analyzed in an academic example. Here, uncertainties in the initial conditions are of less importance compared to uncertainties in system parameters that continuously influence the system state and the model parameter estimation. Finally, the proposed extension using a multilayered uncertainty space is applied on a multi-degree-of-freedom (MDOF) laboratory structure: a beam made of stainless steel with synthetic data or real measured data of vertical accelerations. Young’s modulus as a model parameter can be estimated in a reasonable range, independently of the measurement data generation.

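    As an illustration of the EnKF machinery the paper builds on (not its multilayered-uncertainty extension), a perturbed-observation ensemble Kalman filter can estimate a constant scalar parameter, loosely a Young's modulus in GPa, from repeated noisy measurements. All numbers below are illustrative.

    ```python
    import random
    import statistics

    def enkf_constant(true_value, n_ens, n_obs, obs_std, rng):
        """Perturbed-observation EnKF estimating a constant scalar from
        repeated noisy measurements."""
        # Wide prior ensemble centred on an initial guess (here the true value,
        # purely for a compact sketch).
        ens = [true_value * (1.0 + rng.gauss(0.0, 0.5)) for _ in range(n_ens)]
        for _ in range(n_obs):
            y = true_value + rng.gauss(0.0, obs_std)       # synthetic measurement
            mean = statistics.fmean(ens)
            var = statistics.pvariance(ens, mean)
            gain = var / (var + obs_std ** 2)              # scalar Kalman gain
            # Each member assimilates an independently perturbed observation.
            ens = [x + gain * (y + rng.gauss(0.0, obs_std) - x) for x in ens]
        return statistics.fmean(ens)

    # 210 GPa is a typical steel Young's modulus; parameters are illustrative.
    est = enkf_constant(true_value=210.0, n_ens=100, n_obs=50, obs_std=5.0,
                        rng=random.Random(3))
    print(round(est, 1))
    ```

    The paper's point is precisely that this stochastic update breaks down when some uncertainties are epistemic (interval- or fuzzy-valued) rather than aleatory, which motivates the multilayered uncertainty space.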
    Modelling, Vol. 4, Pages 515-528: Supply-Driven Analysis for a Continuous Water Supply Network Based on a Pressure-Based Outflow at the House Outlets under Peak Withdrawal Scenarios (2023-11-08) https://www.mdpi.com/2673-3951/4/4/29

    Modelling doi: 10.3390/modelling4040029

    Authors: Conety Ravi Suribabu Neelakantan Renganathan Thurvas Perumal Sivakumar

    This research introduces a new analysis method for a continuous water supply distribution network. The number of house service connections in buildings of different storeys, rather than the nodal peak demand, is accounted for in the analysis. This work aims to capture the flow that occurs when outlets are opened in house plumbing systems. The approach deviates from the traditional peak-demand-based analysis of the water distribution network. The analysis gives the flow rate at each nodal point that could be observed in buildings of different storeys. The methodology is applied to a hypothetical network and shows how much flow and nodal pressure can occur when different percentages of consumers are in an active state. This study indicates that peak-demand-based sizing of the supply pipes could have a deficient capacity under real scenarios. The proposed analysis method will help in understanding the actual behavior of the network.

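    The abstract does not give the exact pressure-outflow relation used at the house outlets; a commonly used stand-in is the Wagner-type pressure-dependent demand curve, sketched here with illustrative heads and desired demand.

    ```python
    import math

    def wagner_outflow(h, h_min, h_des, q_des):
        """Pressure-dependent nodal outflow (Wagner-type law, an illustrative
        stand-in): no flow below h_min, full desired demand q_des above h_des,
        square-root transition in between."""
        if h <= h_min:
            return 0.0
        if h >= h_des:
            return q_des
        return q_des * math.sqrt((h - h_min) / (h_des - h_min))

    # Outflow at a house outlet for a few pressure heads (m), q_des = 1 L/s.
    flows = [wagner_outflow(h, 0.0, 20.0, 1.0) for h in (0.0, 5.0, 10.0, 20.0, 30.0)]
    print([round(q, 3) for q in flows])   # -> [0.0, 0.5, 0.707, 1.0, 1.0]
    ```

    Replacing fixed nodal demands with such a curve is what turns the usual demand-driven network analysis into the supply-driven analysis the paper advocates.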
    Modelling, Vol. 4, Pages 485-514: Reduced-Dimension Surrogate Modeling to Characterize the Damage Tolerance of Composite/Metal Structures (2023-11-07) https://www.mdpi.com/2673-3951/4/4/28

    Modelling doi: 10.3390/modelling4040028

    Authors: Corey Arndt Cody Crusenberry Bozhi Heng Rochelle Butler Stephanie TerMaath

    Complex engineering models are typically computationally demanding and defined by a high-dimensional parameter space challenging the comprehensive exploration of parameter effects and design optimization. To overcome this curse of dimensionality and to minimize computational resource requirements, this research demonstrates a user-friendly approach to formulating a reduced-dimension surrogate model that represents a high-dimensional, high-fidelity source model. This approach was developed specifically for a non-expert using commercially available tools. In this approach, the complex physical behavior of the high-fidelity source model is separated into individual, interacting physical behaviors. A separate reduced-dimension surrogate model is created for each behavior and then all are summed to formulate the reduced-dimension surrogate model representing the source model. In addition to a substantial reduction in computational resources and comparable accuracy, this method also provides a characterization of each individual behavior providing additional insight into the source model behavior. The approach encompasses experimental testing, finite element analysis, surrogate modeling, and sensitivity analysis and is demonstrated by formulating a reduced-dimension surrogate model for the damage tolerance of an aluminum plate reinforced with a co-cured bonded E-glass/epoxy composite laminate under four-point bending. It is concluded that this problem is difficult to characterize and breaking the problem into interacting mechanisms leads to improved information on influential parameters and efficient reduced-dimension surrogate modeling. The disbond damage at the interface between the resin and metal proved the most difficult mechanism for reduced-dimension surrogate modeling as it is only engaged in a small subspace of the full parameter space. A binary function was successful in engaging this damage mechanism when applicable based on the values of the most influential parameters.

    Modelling, Vol. 4, Pages 470-484: Autoignition Problem in Homogeneous Combustion Systems: GQL versus QSSA Combined with DRG (2023-10-25) https://www.mdpi.com/2673-3951/4/4/27

    Modelling doi: 10.3390/modelling4040027

    Authors: Chunkan Yu Sudhi Shashidharan Shuyang Wu Felipe Minuzzi Viatcheslav Bykov

    The global quasi-linearization (GQL) is used as a method to study and to reduce the complexity of mathematical models of mechanisms of chemical kinetics. Similar to standard methodologies, such as the quasi-steady-state assumption (QSSA), the GQL method defines the fast and slow invariant subspaces and uses slow manifolds to gain a reduced representation. It does not require empirical inputs and is based on the eigenvalue and eigenvector decomposition of a linear map approximating the nonlinear vector field of the original system. In the present work, the GQL-based slow/fast decomposition is applied for different combustion systems. The results are compared with the standard QSSA approach. For this, an implicit implementation strategy described by differential algebraic equations (DAEs) systems is suggested and used, which allows for treating both approaches within the same computational framework. Hydrogen–air (with 9 species) and ethanol–air (with 57 species) combustion systems are considered representative examples to illustrate and verify the GQL. The results show that 4D GQL for hydrogen–air and 14D GQL ethanol–air slow manifolds outperform the standard QSSA approach based on a DAE-based reduced computation model.

    (Article, Vol. 4, Issue 4)
    Modelling, Vol. 4, Pages 454-469: Experimental Validation of Finite Element Models for Open-to-CHS Column Connections https://www.mdpi.com/2673-3951/4/4/26 (2023-10-16)

    Modelling doi: 10.3390/modelling4040026

    Authors: Rajarshi Das Alper Kanyilmaz Herve Degee

    The conventional ways to construct an open-to-circular hollow section (CHS) connection are either to directly weld the open section to the CHS column wall or to use local stiffeners (e.g., diaphragms) and gusset plates to connect the two structural components. These construction methods often subject the CHS to severe local distortions and/or require high welding quantities, hindering the real-life application of hollow sections. To overcome such difficulties, this study proposes two types of moment-resisting “passing-through” connection configurations, developed within the European research project “LASTEICON”. These configurations consist of main beams connected to the CHS column via either an I-section or individual steel plates passing through the CHS column. The passing-through system is implemented using laser cut and weld technology and efficiently avoids excessive use of stiffening plates, local damages on the CHS wall and premature flange failures. The proposed configurations are investigated experimentally and numerically under two different load cases in order to characterize their structural behaviour. Finite element models have been developed and calibrated with respect to the experimental force–displacement behaviour of the connections as well as their observed failure modes. The efficiency, benefits, and limitations of the modelling approach were discussed through a detailed comparison study between the experimental and numerical results.
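The calibration step described above can be illustrated with a toy comparison. The curves below are invented, not the LASTEICON test data: an experimental force-displacement record and a mock FE response are interpolated onto a common grid and scored with a normalized RMSE.

```python
import numpy as np

disp_exp = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])         # displacement, mm
force_exp = np.array([0.0, 40.0, 75.0, 95.0, 105.0, 108.0])  # force, kN
disp_fem = np.linspace(0.0, 10.0, 21)
force_fem = 110.0 * (1.0 - np.exp(-disp_fem / 3.0))          # mock FE response

# Interpolate both curves onto a common displacement grid
grid = np.linspace(0.0, 10.0, 101)
f_exp = np.interp(grid, disp_exp, force_exp)
f_fem = np.interp(grid, disp_fem, force_fem)

rmse = np.sqrt(np.mean((f_fem - f_exp) ** 2))
nrmse = rmse / force_exp.max()
print(f"RMSE = {rmse:.2f} kN ({100.0 * nrmse:.1f}% of peak force)")
```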

    (Article, Vol. 4, Issue 4)
    Modelling, Vol. 4, Pages 426-453: Machine Learning in the Stochastic Analysis of Slope Stability: A State-of-the-Art Review https://www.mdpi.com/2673-3951/4/4/25 (2023-10-01)

    Modelling doi: 10.3390/modelling4040025

    Authors: Haoding Xu Xuzhen He Feng Shan Gang Niu Daichao Sheng

    In traditional slope stability analysis, it is assumed that some “average” or appropriately “conservative” properties operate over the entire region of interest. This kind of deterministic conservative analysis often results in higher costs, and thus, a stochastic analysis considering uncertainty and spatial variability was developed to reduce costs. In the past few decades, machine learning has been greatly developed and extensively used in stochastic slope stability analysis, particularly used as surrogate models to improve computational efficiency. To better summarize the current application of machine learning and future research, this paper reviews 159 studies of supervised learning published in the past 20 years. The achievements of machine learning methods are summarized from two aspects—safety factor prediction and slope stability classification. Four potential research challenges and suggestions are also given.
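As a minimal stand-in for the surrogate-modelling idea surveyed here (synthetic data and a plain least-squares fit, far simpler than the ANN/SVM/RF surrogates used in the reviewed studies), the two tasks can be shown together: regress a factor of safety (FoS), then classify stability as FoS >= 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
cohesion = rng.uniform(5.0, 30.0, n)        # kPa
friction = rng.uniform(15.0, 40.0, n)       # degrees
slope_angle = rng.uniform(20.0, 50.0, n)    # degrees

# Synthetic "true" FoS with mild noise, standing in for expensive simulations
fos = 0.03 * cohesion + 0.025 * friction - 0.02 * slope_angle + 0.8
fos += rng.normal(0.0, 0.02, n)

# Least-squares surrogate fitted to the simulated samples
X = np.column_stack([cohesion, friction, slope_angle, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, fos, rcond=None)
pred = X @ coef

r2 = 1.0 - np.sum((fos - pred) ** 2) / np.sum((fos - fos.mean()) ** 2)
accuracy = np.mean((pred >= 1.0) == (fos >= 1.0))
print(f"surrogate R^2 = {r2:.3f}, stability classification accuracy = {accuracy:.2f}")
```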

    (Review, Vol. 4, Issue 4)
    Modelling, Vol. 4, Pages 408-425: Li+ Separation from Multi-Ionic Mixtures by Nanofiltration Membranes: Experiments and Modeling https://www.mdpi.com/2673-3951/4/3/24 (2023-09-20)

    Modelling doi: 10.3390/modelling4030024

    Authors: Tobias Hubach Marcel Pillath Clemens Knaup Stefan Schlüter Christoph Held

    Aqueous sources like salt lake brines and seawater are the most abundant source for lithium ions and might contribute to the growing demand for lithium for energy storage. By coupling with the increasingly relevant reverse osmosis systems, nanofiltration can provide a promising process alternative to conventional methods such as water evaporation and salt precipitation from ores or brines for this purpose. One possible model for nanofiltration is the solution-diffusion-electromigration model (SDEM). First, the model was parametrized by determining the permeances from simple electrolyte mixtures containing two salts. Then, the SDEM was used to predict the rejections of complex multi-electrolyte solutions that mimic seawater and reverse osmosis brine, without fitting additional parameters to experimental data of this complex mixture. This allowed predicting ion rejections satisfactorily. Negative rejections due to spontaneously generated electric fields in the membrane could also be qualitatively described. In summary, this SDEM modeling can provide an important contribution to the purification of Li+ from aqueous sources.
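The negative rejections mentioned above follow directly from the observed-rejection definition R_i = 1 - c_permeate,i / c_feed,i. The concentrations below are invented for illustration: an electric field generated by the strongly rejected ions can drag a co-ion into the permeate faster than water, pushing its rejection below zero.

```python
# Invented feed/permeate concentrations (mol/L), not measured data
feed = {"Li+": 0.010, "Na+": 0.50, "Mg2+": 0.055, "Cl-": 0.62}
permeate = {"Li+": 0.009, "Na+": 0.52, "Mg2+": 0.006, "Cl-": 0.54}

# Observed rejection of each ion: negative when permeate is more concentrated
rejection = {ion: 1.0 - permeate[ion] / feed[ion] for ion in feed}
for ion, r in rejection.items():
    print(f"{ion:>4}: R = {100.0 * r:+.1f}%")
```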

    (Article, Vol. 4, Issue 3)
    Modelling, Vol. 4, Pages 394-407: Investigating Ice Loads on Subsea Pipelines with Cohesive Zone Model in Abaqus https://www.mdpi.com/2673-3951/4/3/23 (2023-09-14)

    Modelling doi: 10.3390/modelling4030023

    Authors: Igor Gribanov Rocky Taylor Jan Thijssen Mark Fuglem

    Subsea pipelines and cables placed in ice-prone regions may be at risk of iceberg damage. In particular, pipes that are not buried may come in direct contact with iceberg keels. Knowing the range of interaction forces helps to assess the types and magnitudes of potential damage. Experimental studies provide the most valuable data about the interaction forces, while numerical modeling may give insight into configurations that are difficult to study experimentally. This work applies the cohesive zone model to investigate the fracture behavior of ice samples. Simulations are performed in 2D with the Abaqus explicit solver. Modeled interaction forces from multiple simulations are recorded and compared to understand how the geometry of the samples affects the fracture. Repeat interactions with different grain configurations are conducted to investigate associated variance in fracture patterns and loads. t-tests show that the force application angle and the indenter’s position significantly affect the fracture force.

    (Article, Vol. 4, Issue 3)
    Modelling, Vol. 4, Pages 382-393: Modelling and Simulating the Digital Measuring Twin Based on CMM https://www.mdpi.com/2673-3951/4/3/22 (2023-08-17)

    Modelling doi: 10.3390/modelling4030022

    Authors: Miladin A. Marjanovic Slavenko M. Stojadinovic Sasa T. Zivanovic

    In order to perform the inspection planning process on the coordinate measuring machine (CMM), it is necessary to model the measuring system, comprising the workpiece, the CMM, and the fixture. The metrological analysis of the workpiece is then conducted, followed by the creation of a measurement program for simulation on a virtual measuring machine in a CAD environment. This paper presents the modelling and simulation of a virtual measuring system based on a real CMM using PTC Creo Parametric 5.0 software. The simulation involved programming the measuring path and generating a DMIS (*.ncl) file, which represents the standard modelled types of tolerance. A metrological analysis of the measured part was performed for the given tolerance types (location, perpendicularity, flatness, etc.). The components of the CMM and the assembly with defined kinematic connections are also modelled. Following the simulation and generation of the output DMIS file in PTC Creo using the virtual CMM, the real CMM was programmed and used for actual measurements. Subsequently, a measurement report was generated. The main result of this paper is the modelling of an offline Digital Measuring Twin (DMT) based on the DMIS file.
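One of the tolerance evaluations mentioned above, flatness, can be sketched as a computation: fit a reference plane to probed points and take the peak-to-valley residual. The points below are synthetic, not CMM output, and a least-squares plane is only one common approximation (standards define flatness via a minimum-zone fit).

```python
import numpy as np

# Synthetic probed points on a nearly flat face (mm), with slight tilt + noise
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 100.0, 25)
y = rng.uniform(0.0, 50.0, 25)
z = 0.001 * x - 0.002 * y + 5.0 + rng.uniform(-0.004, 0.004, 25)

# Least-squares reference plane z = a*x + b*y + c
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

residuals = z - (a * x + b * y + c)
flatness = residuals.max() - residuals.min()   # peak-to-valley deviation
print(f"flatness (peak-to-valley) = {1000.0 * flatness:.1f} um")
```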

    (Article, Vol. 4, Issue 3)
    Modelling, Vol. 4, Pages 366-381: A Second-Order Dynamic Friction Model Compared to Commercial Stick–Slip Models https://www.mdpi.com/2673-3951/4/3/21 (2023-08-11)

    Modelling doi: 10.3390/modelling4030021

    Authors: Georg Rill Matthias Schuderer

    Friction has long been an important issue in multibody dynamics. Static friction models apply appropriate regularization techniques to convert the stick inequality and the non-smooth stick–slip transition of Coulomb’s approach into a continuous and smooth function of the sliding velocity. However, a regularized friction force is not able to maintain long-term stick. That is why dynamic friction models were developed in recent decades. The friction force depends herein not only on the sliding velocity but also on internal states. Probably the best-known representative, the LuGre friction model, is based on a fictitious bristle but offers only a rather simple approximation. The recently published second-order dynamic friction model describes the dynamics of a fictitious bristle more accurately. It is based on a regularized friction force characteristic, which is continuous and smooth but can maintain long-term stick due to an appropriate shift in the regularization. Its performance is compared here to stick–slip friction models recently introduced in commercial multibody software packages. The results obtained by a virtual friction test-bench and by a more practical festoon cable system are very promising. Thus, the second-order dynamic friction model may serve not only as an alternative to the LuGre model but also to commercial stick–slip models.
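For reference, the LuGre model named above can be written in a few lines; the parameter values here are arbitrary, chosen only to show how the internal bristle state z settles during steady sliding.

```python
import math

# LuGre: dz/dt = v - |v| * z / g(v),  F = sigma0*z + sigma1*dz/dt + sigma2*v
sigma0, sigma1, sigma2 = 1e5, 300.0, 0.4   # bristle stiffness/damping, viscous
Fc, Fs, vs = 1.0, 1.5, 0.01                # Coulomb level, static level, Stribeck velocity

def g(v):
    """Stribeck curve between static and Coulomb friction, scaled by sigma0."""
    return (Fc + (Fs - Fc) * math.exp(-(v / vs) ** 2)) / sigma0

def simulate(v, t_end=0.05, dt=1e-5):
    """Integrate the bristle state at constant sliding velocity v (explicit Euler)."""
    z, F = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        zdot = v - abs(v) * z / g(v)
        F = sigma0 * z + sigma1 * zdot + sigma2 * v
        z += zdot * dt
    return F

print(f"friction force at steady sliding, v = 0.1 m/s: {simulate(0.1):.3f}")
```

At steady sliding, z approaches g(v)·sign(v), so the force tends to the Stribeck value sigma0·g(v) + sigma2·v.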

    (Article, Vol. 4, Issue 3)
    Modelling, Vol. 4, Pages 351-365: Modeling of Human-Exoskeleton Alignment and Its Effect on the Elbow Flexor and Extensor Muscles during Rehabilitation https://www.mdpi.com/2673-3951/4/3/20 (2023-07-20)

    Modelling doi: 10.3390/modelling4030020

    Authors: Clarissa Rincon Pablo Delgado Nils A. Hakansson Yimesker Yihun

    Human-exoskeleton misalignment could lead to permanent damage to the targeted limb with long-term use in rehabilitation. Hence, achieving proper alignment is necessary to ensure patient safety and an effective rehabilitative journey. In this study, a joint-based and a task-based exoskeleton for upper limb rehabilitation were modeled and assessed. The assessment examined and quantified the misalignment present at the elbow joint as well as its effects on the main flexor and extensor muscles’ tendon length during elbow flexion-extension. The effects of the misalignments found for both exoskeletons proved to be minimal in most muscles observed, except the anconeus and brachialis. The anconeus muscle demonstrated a relatively higher variation in tendon length with the joint-based exoskeleton misalignment, indicating that the task-based exoskeleton is favored for tasks that involve this particular muscle. Moreover, the brachialis demonstrated a significantly higher variation with the task-based exoskeleton misalignment, indicating that the joint-based exoskeleton is favored for tasks that involve this muscle.

    (Article, Vol. 4, Issue 3)
    Modelling, Vol. 4, Pages 336-350: High-Throughput Numerical Investigation of Process Parameter-Melt Pool Relationships in Electron Beam Powder Bed Fusion https://www.mdpi.com/2673-3951/4/3/19 (2023-07-10)

    Modelling doi: 10.3390/modelling4030019

    Authors: Christoph Breuning Jonas Böhm Matthias Markl Carolin Körner

    The reliable and repeatable fabrication of complex geometries with predetermined homogeneous properties is still a major challenge in electron beam powder bed fusion (PBF-EB). Although previous research identified a variety of process parameter–property relationships, the underlying end-to-end approach, which directly relates process parameters to material properties, omits the underlying thermal conditions. Since the local properties are governed by the local thermal conditions of the melt pool, the end-to-end approach is insufficient to transfer predetermined properties to complex geometries and different processing conditions. This work utilizes high-throughput thermal simulation for the identification of fundamental relationships between process parameters, processing conditions, and the resulting melt pool geometry in the quasi-stationary state of line-based hatching strategies in PBF-EB. Through a comprehensive study of over 25,000 parameter combinations, including beam power, velocity, line offset, preheating temperature, and beam diameter, process parameter-melt pool relationships are established, processing boundaries are identified, and guidelines for the selection of process parameters to achieve the desired properties under different processing conditions are derived.
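The kind of parameter-melt pool relationship studied above can be previewed with a far simpler analytic tool, the Rosenthal moving point-source solution; the material values below are rough Ni-alloy estimates and purely illustrative, not the paper's thermal model.

```python
import math

k = 25.0        # thermal conductivity, W/(m K)
alpha = 5.0e-6  # thermal diffusivity, m^2/s
T0 = 1273.0     # preheat temperature, K
T_melt = 1923.0 # melting (liquidus) temperature, K

def rosenthal_T(x, y, P, v, eta=0.9):
    """Quasi-stationary temperature in the frame moving with the beam (+x)."""
    R = math.hypot(x, y)
    if R == 0.0:
        return float("inf")
    return T0 + eta * P / (2.0 * math.pi * k * R) * math.exp(-v * (R + x) / (2.0 * alpha))

def melt_pool_halfwidth(P, v, dy=1e-7):
    """Scan transverse to the track at the beam position for the melt isotherm."""
    y = dy
    while rosenthal_T(0.0, y, P, v) > T_melt and y < 0.01:
        y += dy
    return y

for P, v in [(300.0, 1.0), (600.0, 1.0), (600.0, 2.0)]:
    w = melt_pool_halfwidth(P, v)
    print(f"P = {P:4.0f} W, v = {v:.1f} m/s -> melt pool half-width ~ {1e6 * w:.0f} um")
```

Doubling the velocity at fixed power narrows the melt pool, the qualitative trend that such process maps capture.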

    (Article, Vol. 4, Issue 3)
    Modelling, Vol. 4, Pages 323-335: Modelling of the Solidifying Microstructure of Inconel 718: Quasi-Binary Approximation https://www.mdpi.com/2673-3951/4/3/18 (2023-06-22)

    Modelling doi: 10.3390/modelling4030018

    Authors: Nikolai Kropotin Yindong Fang Chu Yu Martin Seyring Katharina Freiberg Stephanie Lippmann Tatu Pinomaa Anssi Laukkanen Nikolas Provatas Peter K. Galenko

    The prediction of the equilibrium and metastable morphologies during the solidification of Ni-based superalloys on the mesoscopic scale can be performed using phase-field modeling. In the present paper, we apply the phase-field model to simulate the evolution of solidification microstructures depending on undercooling in a quasi-binary approximation. The results of modeling are compared with experimental data obtained on samples of the alloy Inconel 718 (IN718) processed using the electromagnetic levitation (EML) technique. The final microstructure, concentration profiles of niobium, and the interface-velocity–undercooling relationship predicted by the phase field modeling are in good agreement with the experimental findings. The simulated microstructures and concentration fields can be used as inputs for the simulation of the precipitation of secondary phases.
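To make the method concrete, here is a toy 1D phase-field relaxation, not the paper's quasi-binary IN718 model: an Allen-Cahn-type equation with an invented undercooling drive moves a solid-liquid front.

```python
import numpy as np

# phi = +1 solid, phi = -1 liquid; lam*u > 0 plays the role of an undercooling
# driving force (all parameters invented for illustration)
W, tau, lam, u = 1.0, 1.0, 0.5, 0.3
dx, dt, nx, nt = 0.4, 0.02, 200, 2000

x = np.arange(nx) * dx
phi = -np.tanh((x - 0.25 * nx * dx) / (np.sqrt(2.0) * W))   # solid on the left

for _ in range(nt):
    lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
    lap[0], lap[-1] = 0.0, 0.0                               # crude no-flux ends
    # tau * dphi/dt = W^2 * phi_xx + phi - phi^3 + lam*u*(1 - phi^2)^2
    phi += (dt / tau) * (W**2 * lap + phi - phi**3 + lam * u * (1.0 - phi**2) ** 2)

front = x[np.argmin(np.abs(phi))]   # phi = 0 marks the interface
print(f"front position after {nt * dt:.0f} time units: {front:.2f} (started at 20.0)")
```

The (1 - phi^2)^2 coupling keeps the bulk wells at phi = +/-1 while tilting their depths, so the solid front advances at a rate set by the undercooling, the qualitative content of the interface-velocity–undercooling relationship.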

    (Article, Vol. 4, Issue 3)
    Modelling, Vol. 4, Pages 296-322: A Novel Mesoscopic Drill Bit Model for Deep Drilling Applications https://www.mdpi.com/2673-3951/4/2/17 (2023-06-20)

    Modelling doi: 10.3390/modelling4020017

    Authors: Mohamed Ichaoui Frank Schiefer Georg-Peter Ostermeyer

    This paper deals with the development of a novel mesoscopic model of polycrystalline diamond compact (PDC) drill bits that can be implemented in complex drill string models for simulations to analyse the influence of rock inhomogeneities or the impact of anti-whirl bits on drill string dynamics. In contrast to existing modelling approaches, the model is developed at a mesoscopic level, where the basic bit–rock interaction is taken from the macroscopic bit model and the cutting characteristics are summarised at a microscopic cutting level into a simplified configuration via cutting blades. This model can therefore effectively describe asymmetries and thus interactions between the torsional and lateral dynamics of the drill bit, and is particularly suitable for investigating the effects of drilling into rock inhomogeneities and fault zones on drilling dynamics. By integration into a complex drill string model, simulation studies of drilling through a sandwich formation were carried out. The simulation results allow detailed stability statements and show the influence of formation properties and bit design on torsional and lateral drill string dynamics.
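The torsional dynamics that such bit models feed into can be illustrated with a minimal 1-DOF sketch (all parameter values invented): a torsional spring between a constant-speed top drive and the bit inertia, closed by a velocity-weakening bit torque, produces the classic stick-slip limit cycle.

```python
import math

J, k, c = 400.0, 500.0, 5.0   # bit inertia (kg m^2), pipe stiffness (N m/rad), damping
omega_top = 1.0               # imposed top-drive speed, rad/s
Tc = 1000.0                   # Coulomb-level bit torque, N m

def bit_torque(omega):
    """Velocity-weakening bit torque, regularized around omega = 0."""
    return Tc * (1.0 + math.exp(-2.0 * abs(omega))) * math.tanh(omega / 0.01)

dt, t_end = 1e-3, 200.0
theta, omega, t = 0.0, omega_top, 0.0
omega_min, omega_max = float("inf"), float("-inf")
while t < t_end:
    twist = omega_top * t - theta   # wind-up of the drill pipe
    domega = (k * twist + c * (omega_top - omega) - bit_torque(omega)) / J
    theta += omega * dt
    omega += domega * dt
    t += dt
    if t > t_end / 2:               # record the developed limit cycle
        omega_min = min(omega_min, omega)
        omega_max = max(omega_max, omega)

print(f"bit speed oscillates between {omega_min:.2f} and {omega_max:.2f} rad/s "
      f"(top drive at {omega_top:.2f} rad/s)")
```

Because the weakening slope exceeds the structural damping at the nominal speed, the steady state is unstable and the bit alternates between near-stick and fast slip phases.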

    (Article, Vol. 4, Issue 2)
    Modelling, Vol. 4, Pages 283-295: On the Characterization of Viscoelastic Parameters of Polymeric Pipes for Transient Flow Analysis https://www.mdpi.com/2673-3951/4/2/16 (2023-06-20)

    Modelling doi: 10.3390/modelling4020016

    Authors: Giuseppe Pezzinga

    The behaviour of polymeric pipes in transient flows has been proved to be viscoelastic. Generalized Kelvin–Voigt (GKV) models perform very well when simulating the experimental pressure. However, in the literature, no general indications on the evaluation of the model parameters are given. In the present study, the calibration of GKV model parameters is carried out using a micro-genetic algorithm for experimental tests of transient flows in polymeric pipes taken from the literature. The results confirm that the higher the number of Kelvin–Voigt elements, the better the reproduction of experimental tests, but it is difficult to search for general rules for parameter characterization. Assuming a Kelvin–Voigt (KV) model with a single element, it is shown that the retardation time is related to the oscillation period that can be obtained from the elastic modulus and from easily evaluable pipe characteristics. A simple procedure is then proposed for the characterization of the viscoelastic parameters that can be used by manufacturers and technicians. Considering the limits of such a model, the procedure has to be considered as a first step for the characterization of the viscoelastic parameters of more complex models.
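The link the author exploits — the retardation time of a single Kelvin–Voigt element tied to the oscillation period, which follows from the elastic modulus and easily evaluable pipe characteristics — can be illustrated with the classic water-hammer wave-speed formula. This is a generic textbook sketch, not the paper's calibration procedure; the thin-walled-pipe assumption and all numerical values are invented for illustration.

```python
import math

def wave_speed(K, rho, E, D, e):
    """Pressure wave speed in a thin-walled elastic pipe
    (classic water-hammer formula; illustrative assumption)."""
    return math.sqrt((K / rho) / (1.0 + (K * D) / (E * e)))

# Hypothetical values: water in a polymeric pipe (not the paper's data).
K, rho = 2.1e9, 1000.0      # fluid bulk modulus [Pa], density [kg/m^3]
E, D, e = 1.0e9, 0.1, 0.01  # pipe elastic modulus [Pa], diameter [m], wall [m]
L = 500.0                   # pipe length [m]

a = wave_speed(K, rho, E, D, e)
T = 4.0 * L / a             # fundamental period of the pressure oscillation [s]
print(f"wave speed a = {a:.1f} m/s, oscillation period T = {T:.2f} s")
```

Under the proposed procedure, the retardation time of the single KV element would then be characterized from a period of this kind rather than by full genetic-algorithm calibration.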

    Modelling, Vol. 4, Pages 264-282: Modeling the Global Annual Carbon Footprint for the Transportation Sector and a Path to Sustainability
    https://www.mdpi.com/2673-3951/4/2/15
    Published: 2023-06-15

    Modelling doi: 10.3390/modelling4020015

    Authors: Vikram Mittal, Rajesh Shah

    The transportation industry’s transition to carbon neutrality is essential for addressing sustainability concerns. This study details a model for calculating the carbon footprint of the transportation sector as it progresses towards carbon neutrality. The model aims to support policymakers in estimating the potential impact of various decisions regarding transportation technology and infrastructure. It accounts for energy demand, technological advancements, and infrastructure upgrades as they relate to each transportation market: passenger vehicles, commercial vehicles, aircraft, watercraft, and trains. A technology roadmap underlies this model, outlining anticipated advancements in batteries, hydrogen storage, biofuels, renewable grid electricity, and carbon capture and sequestration. By estimating the demand and the technologies that comprise each transportation market, the model estimates carbon emissions. Results indicate that based on the technology roadmap, carbon neutrality can be achieved by 2070 for the transportation sector. Furthermore, the model found that carbon neutrality can still be achieved with slippage in the technology development schedule; however, delays in infrastructure updates will delay carbon neutrality, while resulting in a substantial increase in the cumulative carbon footprint of the transportation sector.
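The core accounting step the abstract describes — emissions estimated from the demand and the technology mix of each transportation market — can be sketched in a few lines. The market names, demand figures, shares, and emission factors below are invented placeholders, not the model's roadmap data.

```python
def fleet_emissions(markets):
    """Sum demand x technology share x emission factor over markets.
    markets: {name: (demand_units, {tech: (share, kgCO2_per_unit)})}"""
    total = 0.0
    for demand, mix in markets.values():
        total += demand * sum(share * ef for share, ef in mix.values())
    return total

# Invented two-market example (units arbitrary, not the paper's data).
markets = {
    "passenger": (1e6, {"ICE": (0.7, 2.0), "BEV": (0.3, 0.5)}),
    "freight":   (4e5, {"diesel": (0.9, 5.0), "H2": (0.1, 1.0)}),
}
print(fleet_emissions(markets))
```

Letting the shares shift over time according to a technology roadmap, and summing year by year, yields the cumulative-footprint trajectories the study reports.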

    Modelling, Vol. 4, Pages 251-263: Business Process Management Analysis with Cost Information in Public Organizations: A Case Study at an Academic Library
    https://www.mdpi.com/2673-3951/4/2/14
    Published: 2023-05-23

    Modelling doi: 10.3390/modelling4020014

    Authors: Barbara Kissa, Elias Gounopoulos, Maria Kamariotou, Fotis Kitsios

    Public organizations must provide high-quality services at a lower cost. To accomplish this goal, they need to apply well-accepted cost methods and evaluate the efficiency of their processes using Business Process Management (BPM). However, only a few studies have evaluated the addition of cost information to a process model in a public organization. The aim of this research is to evaluate the combination of cost data with process modeling in an academic library. Our research suggests a new and easy-to-implement process analysis in three phases. We combined qualitative (i.e., interviews with the library staff) and quantitative research methods (i.e., estimation of time and cost for each activity and process) to model two important processes of the academic library of the University of Macedonia (UoM). We modeled the lending and return processes using Business Process Model and Notation (BPMN) in an easy-to-understand format. We evaluated the costs of each process and sub-process with the Time-Driven Activity-Based Costing (TDABC) method. The library’s managers found our methodology and results very helpful. Our analysis confirmed that combining workflow and cost analysis may significantly improve the decision-making procedure and the efficiency of an organization’s processes. However, further research is needed to evaluate the appropriateness of combining various cost and BPM methods in other public organizations.
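Time-Driven Activity-Based Costing, as applied by the authors, needs only two estimates per resource pool: the cost of supplied capacity per time unit and the time each activity consumes. A toy sketch with invented figures for a lending desk (not the UoM library's actual data):

```python
def tdabc_cost(capacity_cost, practical_capacity_min, activities):
    """TDABC: capacity cost rate x time consumed per activity.
    activities: {name: (minutes_per_event, events_per_period)}"""
    rate = capacity_cost / practical_capacity_min   # cost per minute
    return {name: rate * minutes * events
            for name, (minutes, events) in activities.items()}

# Hypothetical monthly figures for a lending desk.
costs = tdabc_cost(
    capacity_cost=6000.0,            # staff cost for the period
    practical_capacity_min=12000.0,  # available working minutes
    activities={"lend_book": (3.0, 1500), "return_book": (2.0, 1400)},
)
print(costs)  # cost attributed to each process for the period
```

Attaching these per-activity costs to the corresponding BPMN tasks is what turns the workflow diagram into a cost-annotated process model.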

    Modelling, Vol. 4, Pages 224-250: Optimising Maintenance Workflows in Healthcare Facilities: A Multi-Scenario Discrete Event Simulation and Simulation Annealing Approach
    https://www.mdpi.com/2673-3951/4/2/13
    Published: 2023-05-09

    Modelling doi: 10.3390/modelling4020013

    Authors: Joseph Mwanza, Arnesh Telukdarie, Tak Igusa

    Healthcare systems in low-resource settings need effective methods for managing their scant resources, especially people and equipment. Digital technologies may provide means for circumventing the constraints hindering low-income economies from improving their healthcare services. Although analytical and simulation techniques, such as queuing theory and discrete event simulation, have already been successfully applied in addressing various optimisation problems across different operational contexts, the literature reveals that their application in optimisation of healthcare maintenance systems remains relatively unexplored. This study considers the problem of maintenance workflow optimisation with respect to labour, equipment availability and cost. The study aims to provide objective means for forecasting resource demand, given a set of task requests with varying priorities and queue characteristics that flow from multiple queues, and in parallel, into the same maintenance process for resolution. The paper presents how discrete event simulation is adopted in combination with simulated annealing to develop a decision-support tool that helps healthcare asset managers leverage operational performance data to project future asset-performance trends objectively, and thereby determine appropriate interventions for optimal performance. The study demonstrates that healthcare facilities can achieve efficiency in a cost-effective manner through tool-generated maintenance strategies, and that any future changes can be expeditiously re-evaluated and addressed.
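The simulated-annealing half of the authors' decision-support tool can be sketched independently of the discrete-event model: perturb a candidate resource level, and accept worse candidates with a temperature-dependent probability so the search escapes local minima. The cost function below is a stand-in for the simulation output (backlog penalty versus labour cost), and all parameters are illustrative.

```python
import math
import random

def cost(staff):
    """Stand-in for the DES output: backlog penalty vs. labour cost."""
    return 100.0 / staff + 8.0 * staff   # minimised near staff of 3-4

def anneal(start, t0=10.0, cooling=0.95, steps=300, seed=1):
    """Minimise cost() over integer staffing levels by simulated annealing."""
    random.seed(seed)
    best = cur = start
    t = t0
    for _ in range(steps):
        cand = max(1, cur + random.choice([-1, 1]))   # perturb staffing
        d = cost(cand) - cost(cur)
        if d < 0 or random.random() < math.exp(-d / t):
            cur = cand                                # accept, possibly worse
        if cost(cur) < cost(best):
            best = cur
        t *= cooling                                  # cool down
    return best

print(anneal(start=10))
```

In the paper's setting, evaluating `cost` means running the discrete event simulation for the candidate configuration, which is why a cheap-to-evaluate metaheuristic wrapper is attractive.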

    Modelling, Vol. 4, Pages 211-223: Molecular Dynamics Simulations Correlating Mechanical Property Changes of Alumina with Atomic Voids under Triaxial Tension Loading
    https://www.mdpi.com/2673-3951/4/2/12
    Published: 2023-05-05

    Modelling doi: 10.3390/modelling4020012

    Authors: Junhao Chang, Zengtao Chen, James D. Hogan

    The functionalization of nanoporous ceramics for applications in healthcare and defence necessitates the study of the effects of geometric structures on their fundamental mechanical properties. However, there is a lack of research on their stiffness and fracture strength along diverse directions under multi-axial loading conditions, particularly with the existence of typical voids in the models. In this study, accurate atomic models and corresponding properties were meticulously selected and validated for further investigation. Comparisons were made between typical material geometric and elastic properties and measured results to ensure the reliability of the selected models. The mechanical behavior of nanoporous alumina under multiaxial stretching was explored through molecular dynamics simulations. The results indicated that the stiffness of nanoporous alumina ceramics under uniaxial tension was greater, while the fracture strength was lower, compared to that under multiaxial loading. The fracture of nanoporous ceramics under multi-axial stretching was mainly dominated by void and crack extension, atomic bond fracture, and cracking with different orientations. Furthermore, the effects of increasing strain rates on the void volume fraction were found to be similar across different initial radii. It was also found that increasing tension loading rates had greater effects on decreasing the fracture strain. These findings provide additional insight into the fracture mechanisms of nanoporous ceramics under complex loading states, which can also contribute to the development of higher-scale models in the future.

    Modelling, Vol. 4, Pages 189-210: Development and Validation of a LabVIEW Automated Software System for Displacement and Dynamic Modal Parameters Analysis Purposes
    https://www.mdpi.com/2673-3951/4/2/11
    Published: 2023-04-28

    Modelling doi: 10.3390/modelling4020011

    Authors: Reina El Dahr, Xenofon Lignos, Spyridon Papavieros, Ioannis Vayas

    The structural health monitoring (SHM) technique is a highly competent operative process dedicated to improving the resilience of an infrastructure by evaluating its system state. SHM is performed to identify any modification in the dynamic properties of an infrastructure by evaluating the acceleration, natural frequencies, and damping ratios. Apart from the vibrational measurements, SHM is employed to assess the displacement. Consequently, sensors are mounted on the investigated structure to collect frequent readings at regularly spaced time intervals during and after excitation. In this study, a LabVIEW program was developed for vibrational monitoring and system evaluation. In a case study reported herein, it calculates the natural frequencies as well as the damping and displacement parameters of a cantilever steel beam subjected to excitation at its free end. For that purpose, a Bridge Diagnostic Inc. (BDI) accelerometer and a displacement transducer were mounted in parallel on the free end of the beam. The developed program was capable of detecting the eigenfrequencies, the damping properties, and the displacements from the acceleration data. The evaluated parameters were also estimated with the ARTeMIS modal analysis software for comparison purposes. The reported response confirmed that the proposed system delivered the desired performance, as it successfully identified the system state and modal parameters.
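The two modal quantities extracted from acceleration data can be illustrated on a synthetic decaying signal: the natural frequency from the peak of the amplitude spectrum, and the damping ratio from the logarithmic decrement between successive peaks. This is a generic textbook recipe, not the authors' LabVIEW implementation; the 12 Hz mode, 2% damping, and sample rate are assumed values.

```python
import numpy as np

fs, f_n, zeta = 1000.0, 12.0, 0.02          # assumed sample rate, mode, damping
t = np.arange(0, 5, 1 / fs)
x = np.exp(-zeta * 2 * np.pi * f_n * t) * np.sin(2 * np.pi * f_n * t)

# Natural frequency: peak of the amplitude spectrum.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_est = freqs[np.argmax(spec)]

# Damping ratio: logarithmic decrement between successive positive peaks.
peaks = [i for i in range(1, len(x) - 1)
         if x[i - 1] < x[i] > x[i + 1] and x[i] > 0]
delta = np.log(x[peaks[0]] / x[peaks[1]])
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)

print(f"f = {f_est:.2f} Hz, zeta = {zeta_est:.3f}")
```

On measured signals, windowing and noise handling would be needed before either step, which is part of what a dedicated tool such as ARTeMIS automates.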

    Modelling, Vol. 4, Pages 168-188: Manuscripts Character Recognition Using Machine Learning and Deep Learning
    https://www.mdpi.com/2673-3951/4/2/10
    Published: 2023-04-04

    Modelling doi: 10.3390/modelling4020010

    Authors: Mohammad Anwarul Islam, Ionut E. Iacob

    The automatic character recognition of historic documents has recently gained attention from scholars, due to major improvements in computer vision, image processing, and digitization. While Neural Networks, the current state-of-the-art models used for image recognition, are very performant, they typically require large amounts of training data. In our study we manually built our own relatively small dataset of 404 characters by cropping letter images from a popular historic manuscript, the Electronic Beowulf. To compensate for the small dataset, we used ImageDataGenerator, a Python library, to augment our Beowulf manuscript’s dataset. The training dataset was augmented once, twice, and thrice, which we call resampling 1, resampling 2, and resampling 3, respectively. To classify the manuscript’s character images efficiently, we developed a customized Convolutional Neural Network (CNN) model. We conducted a comparative analysis of the results achieved by our proposed model with other machine learning (ML) models such as support vector machine (SVM), K-nearest neighbor (KNN), decision tree (DT), random forest (RF), and XGBoost. We used pretrained models such as VGG16, MobileNet, and ResNet50 to extract features from character images. We then trained and tested the above ML models and recorded the results. Moreover, we validated our proposed CNN model against the well-established MNIST dataset. Our proposed CNN model achieves very good recognition accuracies of 88.67%, 90.91%, and 98.86% in the cases of resampling 1, resampling 2, and resampling 3, respectively, for the Beowulf manuscript’s data. Additionally, our CNN model achieves the benchmark recognition accuracy of 99.03% for the MNIST dataset.
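The resampling idea — one, two, or three augmented passes over the 404-character set — can be mimicked in plain NumPy. Keras' ImageDataGenerator applies richer transforms (rotations, zooms, etc.), but random pixel shifts already show how the effective dataset grows; the 28x28 image size and shift range here are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, max_shift=2):
    """Randomly shift a character image by a few pixels (illustrative)."""
    dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def resample(dataset, passes):
    """One original plus `passes` augmented copies per image."""
    out = list(dataset)
    for _ in range(passes):
        out.extend(augment(img) for img in dataset)
    return out

chars = [rng.random((28, 28)) for _ in range(404)]   # stand-in for the 404 letters
augmented = resample(chars, passes=3)
print(len(augmented))   # 404 originals + 3 x 404 copies = 1616 images
```

Resampling 1, 2, and 3 in the paper correspond to `passes=1`, `2`, and `3`, which is why recognition accuracy climbs with each level: the CNN simply sees more training variants per character.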

    Modelling, Vol. 4, Pages 133-167: Traceability Management of Socio-Cyber-Physical Systems Involving Goal and SysML Models
    https://www.mdpi.com/2673-3951/4/2/9
    Published: 2023-03-30

    Modelling doi: 10.3390/modelling4020009

    Authors: Amal Ahmed Anda, Daniel Amyot, John Mylopoulos

    Socio-cyber-physical systems (SCPSs) have emerged as networked heterogeneous systems that incorporate social components (e.g., business processes and social networks) along with physical (e.g., Internet-of-Things devices) and software components. Model-driven techniques for building SCPSs need actor and goal models to capture social concerns, whereas system issues are often addressed with the Systems Modeling Language (SysML). Comprehensive traceability between these types of models is essential to support consistency and completeness checks, change management, and impact analysis. However, traceability management between these complementary views is not well supported across SysML tools, particularly when models evolve, because SysML does not provide sophisticated out-of-the-box goal modeling capabilities. In our previous work, we proposed a model-based framework, called CGS4Adaptation, that supports basic traceability by importing goal and SysML models into a leading third-party requirement-management system, namely IBM Rational DOORS. In this paper, we present the framework’s traceability management method and its use for automated consistency and completeness checks. Traceability management also includes implicit link detection, thereby improving the quality of traceability links while better aligning designs with requirements. The method is evaluated using an adaptive SCPS case study involving an IoT-based smart home. The results suggest that the tool-supported method is effective and useful in supporting the traceability management process involving complex goal and SysML models in one environment while saving development time and effort.
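At its simplest, the completeness check such a framework automates reduces to verifying that every goal-model element is covered by at least one trace link into the SysML design. A toy version of that check, with invented goal and block names for a smart-home example (not CGS4Adaptation's actual data model):

```python
def completeness_gaps(goals, trace_links):
    """Return the goals that have no trace link into the design model."""
    covered = {goal for goal, _design in trace_links}
    return sorted(set(goals) - covered)

# Hypothetical smart-home goals and goal-to-SysML trace links.
goals = ["G1: comfort", "G2: energy saving", "G3: security"]
links = [("G1: comfort", "Block: Thermostat"),
         ("G2: energy saving", "Block: Scheduler")]
print(completeness_gaps(goals, links))  # 'G3: security' is untraced
```

Consistency checking and implicit link detection build on the same link set, e.g. flagging links whose endpoints no longer exist after a model evolves, or inferring a goal-to-block link when both trace to the same requirement.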

    Modelling, Vol. 4, Pages 102-132: Theoretical Advancements on a Few New Dependence Models Based on Copulas with an Original Ratio Form
    https://www.mdpi.com/2673-3951/4/2/8
    Published: 2023-03-29

    Modelling doi: 10.3390/modelling4020008

    Authors: Christophe Chesneau

    Copulas are well-known tools for describing the relationship between two or more quantitative variables. They have recently received a lot of attention, owing to the complexity of variable dependence that appears in heterogeneous modern problems. In this paper, we offer five new copulas based on a common original ratio form. All of them are defined with a single tuning parameter, and all reduce to the independence copula when this parameter is equal to zero. Wide admissible domains for this parameter are established, and the mathematical developments primarily rely on non-trivial limits, two-dimensional differentiations, suitable factorizations, and mathematical inequalities. The corresponding functions and characteristics of the proposed copulas are examined in detail. In particular, as common features, it is shown that they are diagonally symmetric, but not Archimedean, not radially symmetric, and without tail dependence. The theory is illustrated with numerical tables and graphics. A final part discusses the multi-dimensional variation of our original ratio form. The contributions are primarily theoretical, but they provide the framework for cutting-edge dependence models that have potential applications across a wide range of fields. Some established two-dimensional inequalities may be of interest beyond the purposes of this paper.
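The conditions any candidate ratio-form copula must satisfy — the boundary conditions and the 2-increasing property — can be checked numerically. The copula below is the classic Ali–Mikhail–Haq family, used here only as a stand-in for the paper's five constructions; like them, it has a ratio form with a single tuning parameter and reduces to the independence copula at theta = 0.

```python
def amh(u, v, theta):
    """Ali-Mikhail-Haq copula: a classic ratio-form example (not the paper's)."""
    return (u * v) / (1.0 - theta * (1.0 - u) * (1.0 - v))

def c_volume(u, v, theta, eps=1e-3):
    """C-volume of the square [u,u+eps]x[v,v+eps]; must be non-negative."""
    return (amh(u + eps, v + eps, theta) - amh(u + eps, v, theta)
            - amh(u, v + eps, theta) + amh(u, v, theta))

theta = 0.5
grid = [i / 10 for i in range(11)]

# Boundary conditions: C(u,0) = 0 and C(u,1) = u.
assert all(abs(amh(u, 0.0, theta)) < 1e-12 for u in grid)
assert all(abs(amh(u, 1.0, theta) - u) < 1e-12 for u in grid)

# 2-increasing: every small rectangle has non-negative C-volume.
assert all(c_volume(u, v, theta) >= 0.0 for u in grid[:-1] for v in grid[:-1])
print("boundary and 2-increasing checks passed for theta =", theta)
```

A numerical sweep like this is no substitute for the paper's proofs of admissible parameter domains, but it is a quick sanity check when experimenting with new ratio forms.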

    Article
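    The abstract's key structural claim, a single tuning parameter whose zero value recovers the independence copula C(u, v) = uv, can be illustrated with the classical Farlie-Gumbel-Morgenstern (FGM) family. This is a sketch of the general pattern only; the paper's own ratio-form copulas are not reproduced here:

```python
def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula: C(u, v) = u*v*(1 + theta*(1-u)*(1-v)).
    Valid for theta in [-1, 1]; theta = 0 gives the independence copula u*v."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

# At theta = 0 the copula reduces to independence for any (u, v).
assert fgm_copula(0.3, 0.7, 0.0) == 0.3 * 0.7

# Boundary conditions shared by all copulas: C(u, 0) = 0 and C(u, 1) = u.
assert fgm_copula(0.4, 0.0, 0.8) == 0.0
assert abs(fgm_copula(0.4, 1.0, 0.8) - 0.4) < 1e-12
```

    The same two boundary checks apply to any candidate copula, which makes them a quick sanity test when experimenting with new one-parameter families.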
    Modelling, Vol. 4, Pages 87-101: Hybrid Finite-Discrete Element Modeling of the Mode I Tensile Response of an Alumina Ceramic https://www.mdpi.com/2673-3951/4/1/7 (2023-03-13)

    Modelling doi: 10.3390/modelling4010007

    Authors: Jie Zheng Haoyang Li James D. Hogan

    We have developed a three-dimensional hybrid finite-discrete element model to investigate the mode I tensile opening failure of alumina ceramic. This model implicitly considers the flaw system in the material and explicitly shows the macroscopic failure patterns. A single main crack perpendicular to the loading direction is observed during the tensile loading simulation. Some fragments appear near the crack surfaces due to crack branching. The tensile strength obtained by our model is consistent with the experimental results from the literature. Once validated with the literature, the influences of the distribution of the flaw system on the tensile strength and elastic modulus are explored. The simulation results show that the material with more uniform flaw sizes and fewer big flaws has stronger tensile strength and higher elastic modulus.

    Article
    Modelling, Vol. 4, Pages 70-86: Nonlinear Modeling of an Automotive Air Conditioning System Considering Active Grille Shutters https://www.mdpi.com/2673-3951/4/1/6 (2023-02-02)

    Modelling doi: 10.3390/modelling4010006

    Authors: Trevor Parent Jeffrey J. Defoe Afshin Rahimi

    This paper expands upon the state of the art in nonlinear modeling of automotive air conditioning systems. Prior models considered only the effects of the refrigerant compressor and the condenser fan. There are two new aspects included here. First, we create a mathematical model for front-end underhood airflow, considering vehicle speed, condenser fan rotational speed, and active grille shutter position. In addition, we present a new model for the power consumption of the vehicle associated with aerodynamic drag caused by underhood flow, as well as a fan power model which accounts not only for changes in rotational speed but also changes in flow rate. The models developed in this paper are coded in MATLAB/Simulink and assessed for various vehicle driving conditions against a higher-fidelity vehicle energy management model, showing good agreement. By including the active grille shutters as a controllable actuator and the impact of underhood flow on vehicle drag and fan power consumption, control schemes can be developed to holistically target reduced energy consumption for the air conditioning system and, thus, improve the overall vehicle energy efficiency.

    Article
    Modelling, Vol. 4, Pages 56-69: Off-Design Analysis Method for Compressor Fouling Fault Diagnosis of Helicopter Turboshaft Engine https://www.mdpi.com/2673-3951/4/1/5 (2023-01-28)

    Modelling doi: 10.3390/modelling4010005

    Authors: Farshid Bazmi Afshin Rahimi

    Fouling, caused by the adhesion of fine materials to the blades of the compressor’s last stages, changes the airfoil’s shape and function and the inlet flow angle on the blades. As the fouling increases, the range of influence increases, and the mass flow rate and overall engine efficiency are reduced. Therefore, the compressor is choked at lower speeds. This study aims to simulate compressor performance during off-design conditions due to fouling and to present an approach for modeling faults in diagnostic and health monitoring systems. A computational fluid dynamics analysis is carried out to evaluate the proposed method on General Electric’s T700-GE turboshaft engine, and the performance is evaluated at different flight conditions. The results show promising outcomes with an average accuracy of 88% that would help future turboshaft health monitoring systems.

    Article
    Modelling, Vol. 4, Pages 37-55: Machine Learning Methods for Diabetes Prevalence Classification in Saudi Arabia https://www.mdpi.com/2673-3951/4/1/4 (2023-01-25)

    Modelling doi: 10.3390/modelling4010004

    Authors: Entissar S. Almutairi Maysam F. Abbod

    Machine learning algorithms have been widely used in public health for predicting or diagnosing epidemiological chronic diseases, such as diabetes mellitus, which is classified as an epidemic due to its high rates of global prevalence. Machine learning techniques are useful for the processes of description, prediction, and evaluation of various diseases, including diabetes. This study investigates the ability of different classification methods to classify diabetes prevalence rates and the predicted trends in the disease according to associated behavioural risk factors (smoking, obesity, and inactivity) in Saudi Arabia. Classification models for diabetes prevalence were developed using different machine learning algorithms, including linear discriminant (LD), support vector machine (SVM), K-nearest neighbour (KNN), and neural network pattern recognition (NPR). Four kernel functions of SVM and two types of KNN algorithms were used, namely linear SVM, Gaussian SVM, quadratic SVM, cubic SVM, fine KNN, and weighted KNN. The accuracy of each developed model was evaluated, and the developed classifiers were compared in terms of prediction speed and training time using the Classification Learner App in MATLAB. The experimental results on the predictive performance of the classification models showed that weighted KNN performed best in predicting the diabetes prevalence rate, with the highest average accuracy of 94.5% and a shorter training time than the other classification methods, for both the men's and the women's datasets.

    Article
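    The weighted-KNN variant the study found most accurate weights each neighbour's vote by its inverse distance. Below is a minimal pure-Python sketch of that idea; the data, labels, and function name are invented for the example, and the paper itself used MATLAB's Classification Learner App rather than custom code:

```python
import math

def weighted_knn_predict(X_train, y_train, x, k=3):
    """Classify x by its k nearest neighbours, weighting each vote by
    the inverse of its distance (a common 'weighted KNN' scheme)."""
    nearest = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(X_train, y_train)
    )[:k]
    votes = {}
    for d, label in nearest:
        votes[label] = votes.get(label, 0.0) + 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)

# Toy data: two behavioural risk factors (say, obesity % and inactivity %)
# mapped to a low/high prevalence class label.
X = [(10.0, 20.0), (12.0, 22.0), (35.0, 40.0), (38.0, 42.0)]
y = ["low", "low", "high", "high"]
print(weighted_knn_predict(X, y, (11.0, 21.0)))  # -> low
print(weighted_knn_predict(X, y, (36.0, 41.0)))  # -> high
```

    Inverse-distance weighting lets close neighbours dominate the vote, which is one reason weighted KNN can outperform the unweighted "fine KNN" variant on the same data.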
    Modelling, Vol. 4, Pages 35-36: Acknowledgment to the Reviewers of Modelling in 2022 https://www.mdpi.com/2673-3951/4/1/3 (2023-01-18)

    Modelling doi: 10.3390/modelling4010003

    Authors: Modelling Editorial Office

    High-quality academic publishing is built on rigorous peer review [...]

    Editorial
    Modelling, Vol. 4, Pages 19-34: IndShaker: A Knowledge-Based Approach to Enhance Multi-Perspective System Dynamics Analysis https://www.mdpi.com/2673-3951/4/1/2 (2022-12-23)

    Modelling doi: 10.3390/modelling4010002

    Authors: Salvatore Flavio Pileggi

    Decision making as a result of system dynamics analysis requires, in practice, a straightforward and systematic modeling capability as well as a high level of customization and flexibility to adapt to situations and environments that may differ greatly from one another. While in general terms a completely generic approach may not be as effective as ad hoc solutions, the proper application of modern technology may facilitate agile strategies as a result of a smart combination of qualitative and quantitative aspects. In order to address such complexity, we propose a knowledge-based approach that integrates the systematic computation of heterogeneous criteria with open semantics. The holistic understanding of the framework is described by a reference architecture, and the proof-of-concept prototype developed can support high-level system analysis, as well as being suitable in a number of application contexts—i.e., as a research/educational tool, communication framework, gamification, and participatory modeling. Additionally, the knowledge-based philosophy, developed upon Semantic Web technology, increases the capability in terms of holistic knowledge building and re-use via interoperability. Last but not least, the framework is designed to constantly evolve in the near future, for instance by incorporating more advanced AI-powered features.

    Article
    Modelling, Vol. 4, Pages 1-18: Damage Evolution Prediction during 2D Scale-Model Tests of a Rubble-Mound Breakwater: A Case Study of Ericeira’s Breakwater https://www.mdpi.com/2673-3951/4/1/1 (2022-12-20)

    Modelling doi: 10.3390/modelling4010001

    Authors: Rute Lemos João A. Santos Conceição J.E.M. Fortes

    Melby presents a formula to predict damage evolution in rubble-mound breakwaters whose armour layer is made of rock, based on the erosion measured in scale-model tests and the characteristics of the incident sea waves in such tests. However, this formula is only valid for armour layers made of rock and for the range of tested sea states. The present work aims to show how the Melby methodology can be used to establish a similar formula for the armour layer damage evolution in a rubble-mound breakwater where tetrapods are employed. For that, a long-duration test series is conducted with a 1:50 scale model of the quay section of the Ericeira Harbour breakwater. The eroded volume of the armour layer was measured using a Kinect position sensor. The damage parameter values measured in the experiments are lower than those predicted by the formulation for rock armour layers. New ap and b coefficients for the Melby formula for the tested armour layer were established based on the minimum root mean square error between the measured and the predicted damage. This work also shows that it is possible to assess the damage evolution in scale-model tests with rubble-mound breakwaters by computing the eroded volume and, subsequently, the dimensionless damage parameter based on the equivalent removed armour units.

    Article
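    The calibration step described above, choosing coefficients that minimise the root mean square error between measured and predicted damage, can be sketched as a simple grid search. The curve S(t) = a*t**b below is a simplified stand-in for the full Melby formula (which also involves the incident-wave characteristics), and all numbers are synthetic:

```python
import math

def fit_damage_curve(t, s_measured, a_grid, b_grid):
    """Grid-search the coefficients of a simplified damage-evolution curve
    S(t) = a * t**b by minimising the RMSE against measured damage values,
    mirroring the minimum-RMSE calibration of the Melby coefficients
    ap and b. Returns (rmse, a, b) for the best pair found."""
    best = None
    for a in a_grid:
        for b in b_grid:
            rmse = math.sqrt(sum((a * ti**b - si) ** 2
                                 for ti, si in zip(t, s_measured)) / len(t))
            if best is None or rmse < best[0]:
                best = (rmse, a, b)
    return best

# Synthetic, noise-free "measurements" generated from S = 0.5 * t**0.25,
# so the search should recover those coefficients exactly.
t = [1.0, 2.0, 4.0, 8.0]
s = [0.5 * ti**0.25 for ti in t]
rmse, a, b = fit_damage_curve(t, s, a_grid=[0.4, 0.5, 0.6],
                              b_grid=[0.2, 0.25, 0.3])
print(a, b)  # -> 0.5 0.25
```

    With real scale-model data the grid would be finer and the residual RMSE nonzero; a gradient-based least-squares solver would serve equally well.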
    Modelling, Vol. 3, Pages 481-498: Empirical Modeling of Transverse Displacements of Single-Sided Transversely Cracked Prismatic Tension Beams https://www.mdpi.com/2673-3951/3/4/31 (2022-12-16)

    Modelling doi: 10.3390/modelling3040031

    Authors: Matjaž Skrinar

    While the effects of axial compression on beams have long been known, the effect of tensile axial loads on one-sided transversely cracked beams is less well known. Namely, the crack shifts the position of the resultant of the axial normal stresses deeper into the uncracked part of the cross-section, and the crack tends to open, causing a transverse displacement. Therefore, this paper focuses on empirical modeling of the considered phenomenon for slender prismatic beams in order to establish a suitable 1D computational model based on detailed 3D FE mesh results. This goal can be achieved through the already established simplified model, where the crack is represented by an internal hinge endowed with a rotational spring. Several analyses of various beams differing in geometry, crack locations, and boundary conditions were executed by implementing 3D FE meshes to establish the appropriate model’s bending governing differential equation. After that, the corresponding parameter definitions were calibrated from the database of 3D FE models. By redefining the model’s input parameters, a suitable solution is achieved, offering a good balance between the results’ accuracy and the required computational effort. The functionality of the newly obtained solutions was verified through some comparative case studies that supplement the derivations.

    Article
    Modelling, Vol. 3, Pages 464-480: Efficient Hydrodynamic Modelling of Urban Stormwater Systems for Real-Time Applications https://www.mdpi.com/2673-3951/3/4/30 (2022-11-17)

    Modelling doi: 10.3390/modelling3040030

    Authors: Henry Baumann Nanna Høegh Ravn Alexander Schaum

    Urban water drainage systems represent complex networks with nonlinear dynamics and different types of interactions. This yields an involved modeling problem for which different off-line simulation approaches are available. Nevertheless, these approaches cannot be used for real-time simulations, i.e., running in parallel with weather nowcasts and forecasts and enabling the monitoring and automatic control of urban water drainage systems. Alternative approaches, used commonly for automation purposes, involve parameterized linear delay systems, which can be used in real time but lack the necessary level of detail, which, in particular, is required for adequate flood risk prognostics. Given this setup, the present paper addresses an approach for the effective modeling of detailed water drainage systems for real-time applications, implemented with the open-source Storm Water Management Model (SWMM) software and exemplified for a part of the water drainage system of the city of Flensburg in northern Germany. Additionally, a freely available early-warning system prototype is introduced and used to combine weather forecast information on a 2-h prediction horizon with the developed model and available measurements. This prototype is subsequently used for data assimilation with the ensemble Kalman filter (EnKF) for the considered area in Flensburg.

    Article
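    The data-assimilation step mentioned above can be sketched in the EnKF's generic stochastic form. This is a textbook analysis step on a toy one-dimensional state (e.g., a single water level), not code from the SWMM-coupled Flensburg prototype:

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, R, rng):
    """Stochastic EnKF analysis step: nudge each forecast ensemble member
    toward a perturbed copy of the observation.
    ensemble: (n_state, n_members); y_obs: (n_obs,);
    H: (n_obs, n_state) observation operator;
    R: (n_obs, n_obs) observation-error covariance."""
    n = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n - 1)                          # sample state covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n).T         # perturbed observations
    return ensemble + K @ (Y - H @ ensemble)

rng = np.random.default_rng(0)
ens = rng.normal(5.0, 2.0, size=(1, 200))  # prior water-level ensemble
H = np.array([[1.0]])
R = np.array([[0.1]])
post = enkf_update(ens, np.array([8.0]), H, R, rng)
# The posterior mean moves from the prior (about 5) toward the
# observation (8), since R is small relative to the prior spread.
print(round(float(post.mean()), 1))
```

    In the real-time setting this update would run each time a new sensor measurement arrives, between forecast steps of the hydrodynamic model.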
    Modelling, Vol. 3, Pages 445-463: Mathematical Modeling of Electrical Circuits and Practical Works of Increasing Difficulty with Classical Spreadsheet Software https://www.mdpi.com/2673-3951/3/4/29 (2022-11-17)

    Modelling doi: 10.3390/modelling3040029

    Authors: Christophe Sauvey

    This paper presents a practical modeling project in electrical engineering, proposed to first-year students of the University Institute of Technology in France during the COVID-19 pandemic. The objective of this paper is twofold. The first objective is to present to the students the modeling and calculation opportunities that spreadsheet software offers in their professional lives. The second objective is to create a file that automatically calculates all the current and voltage values at each point of any alternating-current electrical circuit. The aim of this paper, geared toward students, is to lead them to build their own numerical remote lab autonomously. Therefore, pedagogical keys are given throughout this document to help them progress, both in the conceptual understanding of electrical circuits with series and parallel RLC circuits and in their computation in spreadsheet software. In conclusion, this paper can be used as a base for developing remote modeling practical works for many different devices, as well as a starting point for a database of such analytical models.

    Article
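    The kind of cell-by-cell calculation the students automate, phasor analysis of an RLC circuit, looks like this in code form. The component values are arbitrary examples, and the paper itself works in spreadsheet formulas rather than a programming language:

```python
import cmath
import math

def series_rlc_current(V, f, R, L, C):
    """Phasor current in a series RLC circuit driven by an AC source:
    Z = R + j*(w*L - 1/(w*C)), I = V/Z.
    Returns the current magnitude (A) and phase (degrees)."""
    w = 2.0 * math.pi * f
    Z = complex(R, w * L - 1.0 / (w * C))
    I = V / Z
    return abs(I), math.degrees(cmath.phase(I))

# At the resonant frequency f0 = 1/(2*pi*sqrt(L*C)) the inductive and
# capacitive reactances cancel, so the current is simply V/R and in
# phase with the source voltage.
R, L, C = 10.0, 0.1, 1e-4
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
mag, phase = series_rlc_current(230.0, f0, R, L, C)
print(round(mag, 2))  # -> 23.0
```

    A spreadsheet version stores w, the real and imaginary parts of Z, and |I| in separate columns, which is exactly the decomposition the function above performs.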
    Modelling, Vol. 3, Pages 434-444: Numerical Analysis of the Radial Load, Pressure and Velocity Fields of a Single Blade Pump https://www.mdpi.com/2673-3951/3/4/28 (2022-10-25)

    Modelling doi: 10.3390/modelling3040028

    Authors: Dávid Bleho Róbert Olšiak Branislav Knížat Marek Mlkvik

    The centrifugal screw-type pump is a type of pump which, due to its hydraulic and mechanical properties, is used in several areas of industry (e.g., for sludge and rainwater disposal). To avoid impeller passage clogging, the 3D impeller geometry is designed as a helically curved blade added to a conical hub. The passability through the fluid canal of the modelled impeller is 100 mm. In this paper, the magnitude of the radial force on an impeller blade is investigated as a function of the flow rate. The digital model was designed in Catia V5 and calculated using the commercial Ansys CFX software. A numerical computational fluid dynamics (CFD) method was used to investigate the performance characteristics of the pump, specifically discussing internal flow conditions such as velocity, pressure and the radial force mentioned above.

    Article
    Modelling, Vol. 3, Pages 417-433: Discrete-Event Simulation in Healthcare Settings: A Review https://www.mdpi.com/2673-3951/3/4/27 (2022-10-14)

    Modelling doi: 10.3390/modelling3040027

    Authors: John J. Forbus Daniel Berleant

    We review and define the current state of the art relating to discrete event simulation in healthcare-related systems. A review of the literature published over the past five years (2017–2021) was conducted, building upon previously published work. PubMed and EBSCOhost were searched for journal articles on discrete event simulation in healthcare, resulting in the identification of 933 unique articles. Of these, about half were excluded at the title/abstract level and 154 at the full-text level, leaving 311 papers to analyze. These were categorized, then analyzed by category and collectively to identify publication volume over time, disease focus, activity levels by country, software systems used, and the sizes of the healthcare units under study. A total of 1196 articles were initially identified; this list was narrowed down to 311 for systematic review. Following the schema from prior systematic reviews, the articles fell into four broad categories: health care systems operations (HCSO), disease progression modeling (DPM), screening modeling (SM), and health behavior modeling (HBM). We found that discrete event simulation in healthcare has continued to increase year over year and to expand into diverse areas of the healthcare system. In addition, this study adds extra bibliometric dimensions to gain more insight into the details and nuances of how and where simulation is being used in healthcare.

    Discrete-Event Simulation in Healthcare Settings: A Review John J. Forbus Daniel Berleant doi: 10.3390/modelling3040027 Modelling 2022-10-14 3 4
    Systematic Review
    417 10.3390/modelling3040027 https://www.mdpi.com/2673-3951/3/4/27
    Modelling, Vol. 3, Pages 400-416: Derivation of Cyclic Stiffness and Strength Degradation Curves of Sands through Discrete Element Modelling https://www.mdpi.com/2673-3951/3/4/26 2022-09-30

    Modelling doi: 10.3390/modelling3040026

    Authors: Fedor Maksimov Alessandro Tombari

    Cyclic degradation in fully saturated sands is a liquefaction phenomenon characterized by the progressive variation of the soil strength and stiffness that occurs when the soil is subjected to cyclic loading in undrained conditions. An evaluation of the relationships between the degradation of the soil properties and the number of loading cycles is essential for deriving advanced cyclic constitutive soil models. Generally, the calibration of cyclic damage models can be performed through controlled laboratory tests, such as cyclic triaxial testing. However, the undrained response of soils is dependent on several factors, such as the fabric, sample preparation, initial density, initial stress state, and stress path during loading; hence, a large number of tests would be required. On the other hand, the Discrete Element Method offers an interesting approach to simulating the complex behavior of an assembly of particles, which can be used to perform simulations of geotechnical laboratory testing. In this paper, numerical triaxial analyses of sands with different consistencies, loose and medium-dense states, were performed. First, static triaxial testing was performed to characterize the sand properties and validate the results with the literature data. Then, cyclic undrained triaxial testing was performed to investigate the impact of the number of cycles on the cyclic degradation of the soil stiffness and strength. Laws that can be used in damage soil models were derived.

    Derivation of Cyclic Stiffness and Strength Degradation Curves of Sands through Discrete Element Modelling Fedor Maksimov Alessandro Tombari doi: 10.3390/modelling3040026 Modelling 2022-09-30 3 4
    Article
    400 10.3390/modelling3040026 https://www.mdpi.com/2673-3951/3/4/26
    Modelling, Vol. 3, Pages 385-399: Modelling the Energy Consumption of Driving Styles Based on Clustering of GPS Information https://www.mdpi.com/2673-3951/3/3/25 2022-09-02

    Modelling doi: 10.3390/modelling3030025

    Authors: Michael Breuß Ali Sharifi Boroujerdi Ashkan Mansouri Yarahmadi

    This paper presents a novel approach to distinguishing driving styles with respect to their energy efficiency. A distinct property of our method is that it relies exclusively on the global positioning system (GPS) logs of drivers. This setting is highly relevant in practice as these data can easily be acquired. Relying on positional data alone means that all features derived from them will be correlated, so we strive to find a single quantity that allows us to perform the driving style analysis. To this end we consider a robust variation of the so-called "jerk" of a movement. We give a detailed analysis that shows how the feature relates to a useful model of energy consumption when driving cars. We show that our feature of choice outperforms other more commonly used jerk-based formulations for automated processing. Furthermore, we discuss the handling of noisy, inconsistent, and incomplete data, as this is a notorious problem when dealing with real-world GPS logs. Our solving strategy relies on an agglomerative hierarchical clustering combined with an L-term heuristic to determine the relevant number of clusters. It can easily be implemented and delivers a quick performance, even on very large, real-world datasets. We analyse the clustering procedure, making use of established quality criteria. Experiments show that our approach is robust against noise and able to discern different driving styles.

    Modelling the Energy Consumption of Driving Styles Based on Clustering of GPS Information Michael Breuß Ali Sharifi Boroujerdi Ashkan Mansouri Yarahmadi doi: 10.3390/modelling3030025 Modelling 2022-09-02 3 3
    Article
    385 10.3390/modelling3030025 https://www.mdpi.com/2673-3951/3/3/25
    Modelling, Vol. 3, Pages 374-384: A Numerical Study on the Electrochemical Treatment of Chloride-Contaminated Reinforced Concrete https://www.mdpi.com/2673-3951/3/3/24 2022-08-22

    Modelling doi: 10.3390/modelling3030024

    Authors: Yanan Xi Yun Gao Wenwei Li Dong Lei

    Electrochemical treatment, specifically electrochemical chloride extraction (ECE), is one of the common techniques developed for the rehabilitation of chloride-contaminated reinforced concrete. In practice, ECE is time-consuming; for instance, the treatment duration can last several weeks or even longer. In order to reduce the laboratory work, this paper presents the results of a numerical study of ECE, in which a series of physical equations governing multiple ionic transport is solved using a finite difference method. The effects of some critical factors, such as the treatment duration, the current density and the cover thickness, are discussed in detail. In addition, for the sake of validation, the numerical results are compared with those obtained from an experimental test.

    A Numerical Study on the Electrochemical Treatment of Chloride-Contaminated Reinforced Concrete Yanan Xi Yun Gao Wenwei Li Dong Lei doi: 10.3390/modelling3030024 Modelling 2022-08-22 3 3
    Article
    374 10.3390/modelling3030024 https://www.mdpi.com/2673-3951/3/3/24
    Modelling, Vol. 3, Pages 359-373: Revisiting the Common Practice of Sellars and Tegart’s Hyperbolic Sine Constitutive Model https://www.mdpi.com/2673-3951/3/3/23 2022-08-08

    Modelling doi: 10.3390/modelling3030023

    Authors: Soheil Solhjoo

    Sellars and Tegart’s hyperbolic sine constitutive model is widely used to describe the stress–strain curves of metals in hot deformation processes. The acceptance of this phenomenological model is owed to its versatility (working for a wide range of stress values) and simplicity (being only a function of strain, strain rate, and temperature). The common practices of this model are revisited in this work, with a few suggestions to improve its results. Moreover, it is argued that, with the progress of data-driven models, the main reason for using Sellars and Tegart’s model should be to identify reliable activation energies, not to fit the stress–strain curves. Furthermore, a piece of code (Hot Deformation Fitting Tool) has been created to automate the analysis of stress–strain curves with various models.

    Revisiting the Common Practice of Sellars and Tegart’s Hyperbolic Sine Constitutive Model Soheil Solhjoo doi: 10.3390/modelling3030023 Modelling 2022-08-08 3 3
    Article
    359 10.3390/modelling3030023 https://www.mdpi.com/2673-3951/3/3/23
    Modelling, Vol. 3, Pages 344-358: Characterizing Computational Thinking in the Context of Model-Planning Activities https://www.mdpi.com/2673-3951/3/3/22 2022-08-02

    Modelling doi: 10.3390/modelling3030022

    Authors: Joseph A. Lyon Alejandra J. Magana Ruth A. Streveler

    Computational thinking (CT) is a critical skill for STEM professionals, and educational interventions that emphasize CT are needed. In engineering, one potential pedagogical tool for building CT is modeling, an essential skill for engineering students in which they apply their scientific knowledge to real-world problems involving planning, building, evaluating, and reflecting on created systems to simulate the real world. However, in-depth studies of how modeling is done in class in relation to CT are limited. We used a case study methodology to evaluate a model-planning activity in a final-year undergraduate engineering classroom to elicit CT practices in students as they planned their modeling approach. Thematic analysis was used on student artifacts to triangulate and identify the diverse ways in which students used CT practices. We find that model-planning activities are useful for students to practice many aspects of CT, such as abstraction, algorithmic thinking, and generalization. We report implications for instructors wanting to implement model-planning activities in their classrooms.

    Characterizing Computational Thinking in the Context of Model-Planning Activities Joseph A. Lyon Alejandra J. Magana Ruth A. Streveler doi: 10.3390/modelling3030022 Modelling 2022-08-02 3 3
    Article
    344 10.3390/modelling3030022 https://www.mdpi.com/2673-3951/3/3/22
    Modelling, Vol. 3, Pages 333-343: Classical Molecular Dynamics Simulations of Surface Modifications Triggered by a Femtosecond Laser Pulse https://www.mdpi.com/2673-3951/3/3/21 2022-07-29

    Modelling doi: 10.3390/modelling3030021

    Authors: Vladimir Lipp Beata Ziaja

    This work is devoted to classical molecular dynamics simulations of surface modifications (craters) drilled by single femtosecond laser pulses in silicon and diamond, materials relevant for numerous industrial applications. We propose a methodology paving the way towards a significant decrease in the simulation computational costs, which could also enable a precise estimation of the craters’ size and shape.

    Classical Molecular Dynamics Simulations of Surface Modifications Triggered by a Femtosecond Laser Pulse Vladimir Lipp Beata Ziaja doi: 10.3390/modelling3030021 Modelling 2022-07-29 3 3
    Article
    333 10.3390/modelling3030021 https://www.mdpi.com/2673-3951/3/3/21
    Modelling, Vol. 3, Pages 314-332: High-Fidelity Digital Twin Data Models by Randomized Dynamic Mode Decomposition and Deep Learning with Applications in Fluid Dynamics https://www.mdpi.com/2673-3951/3/3/20 2022-07-21

    Modelling doi: 10.3390/modelling3030020

    Authors: Diana A. Bistrian

    The purpose of this paper is the identification of high-fidelity digital twin data models from numerical code outputs by non-intrusive techniques (i.e., not requiring Galerkin projection of the governing equations onto the reduced modes basis). In this paper the author defines the concept of the digital twin data model (DTM) as a model of reduced complexity that has the main feature of mirroring the original process behavior. The significant advantage of a DTM is to reproduce the dynamics with high accuracy and reduced costs in CPU time and hardware for settings difficult to explore because of the complexity of the dynamics over time. This paper introduces a new framework for creating efficient digital twin data models by combining two state-of-the-art tools: randomized dynamic mode decomposition and deep learning artificial intelligence. It is shown that the outputs are consistent with the original source data with the advantage of reduced complexity. The DTMs are investigated in the numerical simulation of three shock wave phenomena with increasing complexity. The author performs a thorough assessment of the performance of the new digital twin data models in terms of numerical accuracy and computational efficiency.

    High-Fidelity Digital Twin Data Models by Randomized Dynamic Mode Decomposition and Deep Learning with Applications in Fluid Dynamics Diana A. Bistrian doi: 10.3390/modelling3030020 Modelling 2022-07-21 3 3
    Article
    314 10.3390/modelling3030020 https://www.mdpi.com/2673-3951/3/3/20
    Modelling, Vol. 3, Pages 300-313: Comparison of the Effectiveness of Drag Reduction Devices on a Simplified Truck Model through Numerical Simulation https://www.mdpi.com/2673-3951/3/3/19 2022-07-08

    Modelling doi: 10.3390/modelling3030019

    Authors: Terrance Charles Zhiyin Yang

    The aerodynamic efficiency of trucks is very low because of their non-streamlined box shape, which is subject to practical constraints, leaving little room for improvement in terms of aerodynamic efficiency. Hence, other means of improving the aerodynamic efficiency of trucks are needed, and one practical yet relatively simple method of reducing aerodynamic drag is deploying drag reduction devices on trucks. This paper describes a numerical study of flow over a simplified truck with drag reduction devices. The numerical approach employed was Reynolds-averaged Navier–Stokes (RANS). Four test cases with different drag reduction devices deployed around the tractor–trailer gap region were studied. The effectiveness of those drag reduction devices was assessed, and it was demonstrated that in all four cases, the aerodynamic drag was reduced compared with the baseline case without any drag reduction devices. The most effective configuration was that of case 4 (about a 24% reduction), with a roof deflector, side extenders, and five cross-flow vortex trap devices (CVTDs). Flow field analysis was performed to shed light on the drag reduction mechanisms, confirming our previous finding that the main reason for the drag reduction was the reduced pressure on the front face of the trailer, while the reduction in the turbulence level in the tractor–trailer gap region contributed much less to the overall drag reduction.

    Comparison of the Effectiveness of Drag Reduction Devices on a Simplified Truck Model through Numerical Simulation Terrance Charles Zhiyin Yang doi: 10.3390/modelling3030019 Modelling 2022-07-08 3 3
    Article
    300 10.3390/modelling3030019 https://www.mdpi.com/2673-3951/3/3/19
    Modelling, Vol. 3, Pages 272-299: A Framework for Interactive Development of Simulation Models with Strategical–Tactical–Operational Layering Applied to the Logistics of Bulk Commodities https://www.mdpi.com/2673-3951/3/3/18 2022-06-30

    Modelling doi: 10.3390/modelling3030018

    Authors: Andres Guiguet Dirk Pons

    CONTEXT – Simulation modelling provides insight into hidden dynamics underlying business processes. However, an accurate understanding of operations is necessary for fidelity of the model. This is challenging because of the need to extract the tacit nature of operational knowledge and to facilitate the representation of complex processes and decision-making patterns that do not depend on classes, objects, and instantiations. Commonly used industrial simulation software, such as Arena®, does not natively support the object-oriented constructs available for software development. OBJECTIVE – This paper proposes a method for developing simulation models that allows process-owners and modellers to jointly build a series of evolutionary models that improve the conceptual validity of the executable computer model. APPROACH – Software and Systems Engineering principles were adapted to develop a framework that allows a systematic transition from a conceptual to an executable model and allows multiple perspectives to be considered simultaneously. The framework was applied to a logistics case study in a bulk commodities distribution context. FINDINGS – The method guided the development of a set of models that served as scaffolds, allowing the natural flow of ideas from a natural-language domain to Arena® code. In doing so, the modeller and process-owners at the strategic, tactical, and operational levels developed and validated the simulation model. ORIGINALITY – This work provides a framework for structuring the development of simulation models. The framework allows the use of non-object-oriented constructs, making it applicable to SIMAN-based simulation languages and packages such as Arena®.

    A Framework for Interactive Development of Simulation Models with Strategical–Tactical–Operational Layering Applied to the Logistics of Bulk Commodities Andres Guiguet Dirk Pons doi: 10.3390/modelling3030018 Modelling 2022-06-30 3 3
    Article
    272 10.3390/modelling3030018 https://www.mdpi.com/2673-3951/3/3/18
    Modelling, Vol. 3, Pages 255-271: Modeling of a Three-Stage Cascaded Refrigeration System Based on Standard Refrigeration Compressors in Cryogenic Applications above 110 K https://www.mdpi.com/2673-3951/3/2/17 2022-06-17

    Modelling doi: 10.3390/modelling3020017

    Authors: Zbigniew Rogala Adrian Kwiatkowski

    More and more applications, such as natural gas liquefaction, LNG reliquefaction, whole-body cryotherapy and cryopreservation, require cooling in the temperature range from 110 to 150 K. This can be achieved in systems using standard refrigeration compressors, which are reliable and cost-effective but are subject to certain operating limits. This paper investigates the potential of a three-stage cascaded refrigeration system based on standard refrigeration compressors in this range of temperatures. The investigation takes into account the vital limitations of refrigeration compressors and examines possible refrigerant configurations (considering PFCs, HFCs, HCs and HOs); performance limitations such as the cooling power, temperature and system COP; and the influence of the system architecture (single-stage and two-stage compression). The paper investigates whether it is possible to design a three-stage cascaded refrigeration system using standard refrigeration compressors and, if so, at what cost. This investigation shows that the three-stage cascaded refrigeration system can reach a lowest temperature of 127 K with a COP of 0.179, which corresponds to a Carnot efficiency of 0.262. Moreover, systems based on natural refrigerants are found to be advantageous in terms of the achieved temperatures compared to those that use synthetic refrigerants. Furthermore, only the application of R50 (methane) is shown to allow temperatures below 130 K to be achieved, and this is possible only in a two-stage compression cascade system. For most of the investigated configurations, the suction pressure must be below atmospheric pressure to thermally couple the cascade stages.

    Modeling of a Three-Stage Cascaded Refrigeration System Based on Standard Refrigeration Compressors in Cryogenic Applications above 110 K. Zbigniew Rogala, Adrian Kwiatkowski. Modelling, Vol. 3, Issue 2, first page 255. Article. doi: 10.3390/modelling3020017. Published 2022-06-17. https://www.mdpi.com/2673-3951/3/2/17
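Here "Carnot efficiency" reads as the actual COP divided by the ideal Carnot COP T_c/(T_h - T_c), so the two reported figures together imply a particular heat-rejection temperature. A minimal cross-check (the heat-rejection temperature is inferred, not stated in the abstract):

```python
def carnot_cop(t_cold, t_hot):
    """Ideal (Carnot) refrigeration COP between two temperatures in kelvin."""
    return t_cold / (t_hot - t_cold)

# Figures reported in the abstract: evaporating temperature, COP, Carnot efficiency.
t_cold, cop, eta = 127.0, 0.179, 0.262

# Heat-rejection temperature implied by eta = cop / carnot_cop(t_cold, t_hot).
t_hot = t_cold + eta * t_cold / cop
print(f"implied heat-rejection temperature: {t_hot:.0f} K")   # ~313 K, i.e. ~40 degC
print(f"Carnot COP at that temperature lift: {carnot_cop(t_cold, t_hot):.3f}")
```

The implied heat-rejection temperature of roughly 313 K is consistent with a condenser rejecting heat to a warm ambient, which supports the reported pair of figures.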
    Modelling, Vol. 3, Pages 243-254: Cost Optimization of Reinforced Concrete Section According to Flexural Cracking (2022-05-25) https://www.mdpi.com/2673-3951/3/2/16

    Modelling doi: 10.3390/modelling3020016

    Authors: Primož Jelušič

    A series of distributed flexural cracks develops in reinforced concrete flexural elements under the working load. The control of cracking in reinforced concrete is an important issue that must be considered in the design of reinforced concrete structures. Crack width and spacing are influenced by several factors, including the steel percentage, its distribution in the concrete cross-section, the concrete cover, and the concrete properties. In practice, however, a compromise must be made between cracking, durability, ease of construction and cost. This study presents the optimal design of a reinforced concrete cross-section using mixed-integer nonlinear programming (MINLP) and the Eurocode standard. The MINLP optimization model OPTCON was developed for this purpose. The model contains an objective function for the material cost and considers the crack width requirements, which can be satisfied either by direct calculation or by limiting the bar spacing. Owing to the different crack width requirements, two different economic designs of reinforced concrete sections are proposed. The case study presented here demonstrates the value of the optimization approach. A direct comparison between different methods for modelling cracking in reinforced concrete cross-sections, which has not been done before, is also presented.

    Cost Optimization of Reinforced Concrete Section According to Flexural Cracking. Primož Jelušič. Modelling, Vol. 3, Issue 2, first page 243. Article. doi: 10.3390/modelling3020016. Published 2022-05-25. https://www.mdpi.com/2673-3951/3/2/16
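For context on the "direct calculation" route, Eurocode 2 expresses the characteristic crack width as w_k = s_r,max (eps_sm - eps_cm), the maximum crack spacing times the mean strain difference between steel and concrete, and requires it to stay below a limit such as 0.3 mm. A schematic sketch of that constraint; the numeric values are illustrative and not from the OPTCON model:

```python
def crack_width(s_r_max_mm, eps_sm_minus_eps_cm):
    """EC2 direct calculation: w_k = s_r,max * (eps_sm - eps_cm)."""
    return s_r_max_mm * eps_sm_minus_eps_cm

W_MAX_MM = 0.3  # a typical serviceability limit on crack width

# Illustrative values: 250 mm maximum crack spacing, 1.1e-3 strain difference.
w_k = crack_width(250.0, 1.1e-3)
print(f"w_k = {w_k:.3f} mm, within limit: {w_k <= W_MAX_MM}")
```

In an optimization model, this inequality (or the alternative bar-spacing limit) becomes a constraint on the feasible reinforcement layouts.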
    Modelling, Vol. 3, Pages 224-242: Methods, Models and Tools for Improving the Quality of Textual Annotations (2022-04-12) https://www.mdpi.com/2673-3951/3/2/15

    Modelling doi: 10.3390/modelling3020015

    Authors: Maria Teresa Artese, Isabella Gagliardi

    In multilingual textual archives, the availability of textual annotations, that is, keywords manually or automatically associated with texts, is worth exploiting to improve the user experience and successful navigation, search and visualization. It is therefore necessary to study and develop tools for this exploitation. This paper aims to define models and tools for handling textual annotations, in our case the keywords of a scientific library. Against the background of NLP, machine learning and deep learning approaches are presented that increase the quality of keywords in both supervised and unsupervised ways. The different steps of the pipeline are addressed, and different solutions are analyzed, implemented, evaluated and compared, using statistical methods, machine learning and artificial neural networks as appropriate. Where possible, off-the-shelf solutions are also compared. The models are trained on different datasets, either already available or created ad hoc to share common characteristics with the starting dataset. The results obtained are presented, discussed and compared with each other.

    Methods, Models and Tools for Improving the Quality of Textual Annotations. Maria Teresa Artese, Isabella Gagliardi. Modelling, Vol. 3, Issue 2, first page 224. Article. doi: 10.3390/modelling3020015. Published 2022-04-12. https://www.mdpi.com/2673-3951/3/2/15
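As a toy illustration of the kind of unsupervised keyword clean-up such a pipeline performs, near-duplicate surface forms can be merged by string similarity. The greedy strategy and threshold below are illustrative, not the paper's method:

```python
from difflib import SequenceMatcher

def merge_near_duplicates(keywords, threshold=0.85):
    """Greedy canonicalization: keep a keyword only if no kept keyword is too similar."""
    canonical = []
    for kw in keywords:
        kw_norm = kw.strip().lower()
        for seen in canonical:
            if SequenceMatcher(None, kw_norm, seen).ratio() >= threshold:
                break  # close enough to an already-kept keyword: merge
        else:
            canonical.append(kw_norm)
    return canonical

print(merge_near_duplicates(["Modelling", "modeling", "copulas", "Copula"]))
# ['modelling', 'copulas']
```

A production pipeline would replace raw string similarity with embeddings or a trained classifier, but the structure (normalize, compare, merge) is the same.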
    Modelling, Vol. 3, Pages 201-223: Improving Mobile Game Performance with Basic Optimization Techniques in Unity (2022-03-28) https://www.mdpi.com/2673-3951/3/2/14

    Modelling doi: 10.3390/modelling3020014

    Authors: Georgios Koulaxidis, Stelios Xinogalos

    Creating video games can be a very complex process that requires taking into account various hardware and software limitations. The process is even more complex for mobile games, which are limited to the resources their platforms (mobile devices) offer in comparison to game consoles and personal computers. This restriction makes performance one of the most critical requirements, meaning that a video game should be designed and developed all the more carefully. To reduce the resources a game uses, optimization techniques can be applied at different stages of development. For the purposes of this article, we designed and developed a simple shooter video game intended for Android mobile devices. The game was developed with the Unity game engine, and most of the models were designed with the 3D computer graphics software Blender. Two versions of the game were developed in order to study the differences in performance: one that applies basic optimization techniques, such as a low poly count for the models and the object pooling pattern for enemy spawning, and one without these optimizations. Even though the game is not large in scale, the optimized version achieves a better user experience and requires fewer resources to run smoothly. This suggests that in larger and more complex video games, such optimizations could have an even bigger impact on the performance of the final product. To measure how the techniques affected the two versions of the game, the values of frames per second, batches and triangles/polygons were taken into consideration and used as metrics for game performance in terms of CPU usage, rendering (GPU usage) and memory usage.

    Improving Mobile Game Performance with Basic Optimization Techniques in Unity. Georgios Koulaxidis, Stelios Xinogalos. Modelling, Vol. 3, Issue 2, first page 201. Article. doi: 10.3390/modelling3020014. Published 2022-03-28. https://www.mdpi.com/2673-3951/3/2/14
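The object pooling idea referenced above preallocates enemies once and recycles them, avoiding per-spawn allocation and garbage collection pauses. A language-agnostic sketch of the pattern (the article's implementation is in C# for Unity; the Enemy fields here are placeholders):

```python
class Enemy:
    """Placeholder for a pooled game object."""
    def __init__(self):
        self.active = False

    def spawn(self, x, y):
        self.active = True
        self.x, self.y = x, y

class ObjectPool:
    """Preallocate a fixed number of objects; recycle instead of reallocating."""
    def __init__(self, factory, size):
        self._free = [factory() for _ in range(size)]

    def acquire(self):
        return self._free.pop() if self._free else None  # None: pool exhausted

    def release(self, obj):
        obj.active = False
        self._free.append(obj)

pool = ObjectPool(Enemy, size=2)
e1, e2 = pool.acquire(), pool.acquire()
e1.spawn(10, 20)
print(pool.acquire())            # None: no allocation happens when the pool is empty
pool.release(e1)                 # "despawn" returns the object for reuse
print(pool.acquire() is e1)      # the same instance is recycled
```

In Unity the same pattern is applied by deactivating GameObjects instead of destroying them, which is what keeps frame times steady during waves of enemy spawns.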
    Modelling, Vol. 3, Pages 189-200: Developing a Framework for Using Molecular Dynamics in Additive Manufacturing Process Modelling (2022-03-18) https://www.mdpi.com/2673-3951/3/1/13

    Modelling doi: 10.3390/modelling3010013

    Authors: Panagiotis Stavropoulos, Vasiliki Christina Panagiotopoulou

    Additive Manufacturing (AM), also referred to as Smart Manufacturing, is an intrinsic concept in Industry 4.0, offering flexibility and material efficiency. Despite its advantages, certain limitations prevent AM from being used extensively in industrial settings. Therefore, a literature review of process modelling approaches, their advantages and their limitations was performed. The most frequently used process modelling approaches were reviewed and summarized with respect to modelling approach, scale and limitations. The different categories of process modelling approaches were compared, with molecular dynamics emerging as a promising modelling technique for use in software applications. A new framework for modelling additive manufacturing processes based on molecular dynamics is proposed in this work, combining previously published manufacturing methodologies for the AM process, such as manufacturability, design and planning. A validation plan follows, with the main parameters and details highlighted. The proposed framework offers a unique approach to modelling the AM process, based on parameters from manufacturing design, planning and process. This framework will be used in software platforms for predicting temperature distributions and for optimizing the shape and the AM process.

    Developing a Framework for Using Molecular Dynamics in Additive Manufacturing Process Modelling. Panagiotis Stavropoulos, Vasiliki Christina Panagiotopoulou. Modelling, Vol. 3, Issue 1, first page 189. Article. doi: 10.3390/modelling3010013. Published 2022-03-18. https://www.mdpi.com/2673-3951/3/1/13
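To make the molecular dynamics building block concrete: an MD simulation integrates Newton's equations for particles interacting via a potential such as Lennard-Jones, typically with the velocity Verlet scheme. A minimal one-dimensional sketch in reduced units (illustrative only; AM-scale models couple this with heat input and vastly larger particle counts):

```python
def lj_force(r, eps=1.0, sigma=1.0):
    """Lennard-Jones force along the separation r, in reduced units."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

def velocity_verlet(r, v, dt=1e-3, steps=1000):
    """Integrate the relative motion of an LJ pair (unit mass) with velocity Verlet."""
    f = lj_force(r)
    for _ in range(steps):
        r += v * dt + 0.5 * f * dt * dt   # position update
        f_new = lj_force(r)
        v += 0.5 * (f + f_new) * dt       # velocity update with averaged force
        f = f_new
    return r, v

r_eq = 2.0 ** (1.0 / 6.0)                 # equilibrium separation: force vanishes
print(abs(lj_force(r_eq)) < 1e-9)
r, v = velocity_verlet(1.2, 0.0)          # released near the well: stays bound
print(1.0 < r < 1.3)
```

Velocity Verlet is the standard choice in MD because it is time-reversible and conserves energy well over long runs, which matters when predicting temperature distributions.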
    Modelling, Vol. 3, Pages 177-188: A Numerical Simulation of Electrical Resistivity of Fiber-Reinforced Composites, Part 2: Flexible Bituminous Asphalt (2022-03-17) https://www.mdpi.com/2673-3951/3/1/12

    Modelling doi: 10.3390/modelling3010012

    Authors: Rojina Ehsani, Alireza Miri, Fariborz M. Tehrani

    Asphalt concrete pavements are vulnerable to freeze-thaw cycles. Consecutive cracking and the penetration of corrosive agents can expedite the degradation of asphalt pavements and result in weight loss and reduced strength. Fiber reinforcement in flexible bituminous asphalt bridges cracks, limiting the crack width and enhancing the toughness of the composite. Furthermore, steel fibers facilitate asphalt heating during maintenance and repair operations. Electrical resistivity is a vital parameter for measuring the efficiency of these operations and for identifying the state of degradation in fiber-reinforced asphalt concrete. The significant difference between the conductivities of steel fibers and the bituminous matrix warrants an in-depth investigation of the influence of fiber reinforcement on the measured surface electrical resistivity of placed pavements. Numerical simulations are used to predict the resistivity and the deviations associated with randomly distributed fiber reinforcement. The results and discussion reveal the sources and magnitudes of the adjustments due to fiber geometry and content, and examine the associated errors for practical applications.

    A Numerical Simulation of Electrical Resistivity of Fiber-Reinforced Composites, Part 2: Flexible Bituminous Asphalt. Rojina Ehsani, Alireza Miri, Fariborz M. Tehrani. Modelling, Vol. 3, Issue 1, first page 177. Article. doi: 10.3390/modelling3010012. Published 2022-03-17. https://www.mdpi.com/2673-3951/3/1/12
    Modelling, Vol. 3, Pages 164-176: A Numerical Simulation of Electrical Resistivity of Fiber-Reinforced Composites, Part 1: Brittle Cementitious Concrete (2022-03-17) https://www.mdpi.com/2673-3951/3/1/11

    Modelling doi: 10.3390/modelling3010011

    Authors: Alireza Miri, Rojina Ehsani, Fariborz M. Tehrani

    The durability of concrete has a significant influence on the sustainability and resilience of various infrastructures, including buildings, bridges, roadways, dams, and other applications. The penetration of corrosive agents, intensified by exposure to freeze-thaw cycles and the presence of early-age cracks, is a common cause of reinforced concrete degradation. Electrical resistivity is a vital physical property of cementitious composites for assessing the remaining service life of reinforced concrete members subjected to corrosive ion attacks. The application of steel fibers reduces the vulnerability of concrete by limiting crack propagation, but complicates field and laboratory testing due to the random distribution of conductive fibers within the body of the concrete. Numerical simulations facilitate proper modeling of such random distributions and improve the reliability of testing measures. Hence, this paper investigates the influence of fiber reinforcement characteristics on electrical resistivity using multi-physics finite element models. The results examine modeling challenges and include insights on the sensitivity of resistivity measures to fiber reinforcement. Concluding remarks provide the expected bias of electrical resistivity in the presence of steel fibers and aim to facilitate the development of practical guidelines for assessing the durability of fiber-reinforced concrete members using standard electrical resistivity testing procedures.

    A Numerical Simulation of Electrical Resistivity of Fiber-Reinforced Composites, Part 1: Brittle Cementitious Concrete. Alireza Miri, Rojina Ehsani, Fariborz M. Tehrani. Modelling, Vol. 3, Issue 1, first page 164. Article. doi: 10.3390/modelling3010011. Published 2022-03-17. https://www.mdpi.com/2673-3951/3/1/11
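A back-of-the-envelope view of why conductive fibers bias resistivity measurements in both parts of this study: with conduction phases in parallel (the Voigt bound), even a small volume fraction of steel can dominate the effective conductivity. This crude bound ignores fiber geometry, orientation and percolation, which is exactly what the finite element models resolve; all numbers are illustrative:

```python
def voigt_resistivity(rho_matrix, rho_fiber, vf):
    """Lower bound on effective resistivity: phases conducting in parallel."""
    sigma_eff = (1.0 - vf) / rho_matrix + vf / rho_fiber  # conductivities add
    return 1.0 / sigma_eff

rho_concrete = 100.0   # ohm*m, a typical order of magnitude for concrete
rho_steel = 1.5e-7     # ohm*m, carbon steel
rho_eff = voigt_resistivity(rho_concrete, rho_steel, vf=0.01)
print(f"parallel bound with 1% fibers: {rho_eff:.2e} ohm*m")
```

The bound collapses by many orders of magnitude at only 1% fibers, so the measured value in practice lies between this bound and the fiber-free resistivity, depending on whether the fibers form connected paths; hence the need for the random-distribution simulations.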
    Modelling, Vol. 3, Pages 140-163: Theoretical Study of Some Angle Parameter Trigonometric Copulas (2022-03-10) https://www.mdpi.com/2673-3951/3/1/10

    Modelling doi: 10.3390/modelling3010010

    Authors: Christophe Chesneau

    Copulas are important probabilistic tools for modeling and interpreting the correlations of measures involved in real or experimental phenomena. The versatility of these phenomena implies the need for diverse copulas. In this article, we describe and theoretically investigate new two-dimensional copulas based on trigonometric functions modulated by a tuning angle parameter. The independence copula is thus extended in an original manner. Conceptually, the proposed trigonometric copulas are well suited to modeling correlations in periodic, circular, or seasonal phenomena. We examine their qualities, such as various symmetry properties, quadrant dependence properties, possible Archimedean nature, copula ordering, tail dependence, diverse correlations (medial, Spearman, and Kendall), and two-dimensional distribution generation. The proposed copulas are fleshed out in terms of data generation and inference. The theoretical findings are supplemented by graphical and numerical work. The main results are proved using two-dimensional inequality techniques that can be applied for other copula purposes.

    Theoretical Study of Some Angle Parameter Trigonometric Copulas. Christophe Chesneau. Modelling, Vol. 3, Issue 1, first page 140. Article. doi: 10.3390/modelling3010010. Published 2022-03-10. https://www.mdpi.com/2673-3951/3/1/10
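To illustrate the general construction, one simple way to modulate the independence copula Pi(u,v) = uv with trigonometric terms is the perturbation below, which is a valid copula for |theta| <= 1. It is given as a generic example of the approach, not necessarily the exact family studied in the paper:

```python
import math

def C(u, v, theta=0.5):
    """Trigonometric perturbation of the independence copula (valid for |theta| <= 1)."""
    return u * v + theta * math.sin(math.pi * u) * math.sin(math.pi * v) / math.pi ** 2

def density(u, v, theta=0.5):
    """Mixed partial d2C/(du dv) = 1 + theta*cos(pi*u)*cos(pi*v)."""
    return 1.0 + theta * math.cos(math.pi * u) * math.cos(math.pi * v)

grid = [i / 20.0 for i in range(21)]
# Uniform margins: C(u, 1) = u and, by symmetry, C(1, v) = v.
print(all(abs(C(u, 1.0) - u) < 1e-9 for u in grid))
# Nonnegative density, i.e. the 2-increasing property holds.
print(all(density(u, v) >= 0.0 for u in grid for v in grid))
# Spearman's rho for this family: 12 * integral(C) - 3 = 48*theta/pi^4.
print(round(48 * 0.5 / math.pi ** 4, 3))
```

Setting theta = 0 recovers independence, and varying theta traces out a one-parameter family with closed-form Spearman's rho, which is the kind of tractability the abstract's correlation analysis relies on.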
    Modelling, Vol. 3, Pages 127-139: On the Design of Composite Patch Repair for Strengthening of Marine Plates Subjected to Compressive Loads (2022-03-01) https://www.mdpi.com/2673-3951/3/1/9

    Modelling doi: 10.3390/modelling3010009

    Authors: Nikos Kallitsis, Konstantinos Anyfantis

    Marine structures are susceptible to corrosion, which accelerates material wastage. This phenomenon can lead to thickness reduction to the extent that local buckling instabilities may occur. The majority of existing repair techniques require welding, which is a restricting factor in flammable environments where hot work is prohibited. A repair methodology that has attracted research attention for over two decades is the adhesive bonding of a composite patch onto a ship&#8217;s damaged plating. Although most studies have focused on patch repair against crack propagation, restoring the initial buckling strength of corroded marine plates is of high interest. In this work, this technique is assessed through numerical experimentation using finite element analysis (FEA), with the patch&#8217;s dimensions as design parameters. The results are then evaluated using a design-of-experiments (DOE) approach by generating a response surface from central composite design (CCD) points. Applying this methodology to various plates and patches makes it possible to create a repair design procedure that specifies the minimum patch requirements depending on the metal substrate&#8217;s dimensions and the corrosion realized.

    On the Design of Composite Patch Repair for Strengthening of Marine Plates Subjected to Compressive Loads. Nikos Kallitsis, Konstantinos Anyfantis. Modelling, Vol. 3, Issue 1, first page 127. Article. doi: 10.3390/modelling3010009. Published 2022-03-01. https://www.mdpi.com/2673-3951/3/1/9
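The response surface mentioned above is fitted at central composite design points: the 2^k factorial corners, 2k axial points at distance alpha, and a center point, all in coded units. A minimal generator (illustrative; the paper's actual factor ranges and replication are not reproduced here):

```python
from itertools import product

def central_composite_design(k, alpha=None):
    """CCD for k factors in coded units: 2^k factorial + 2k axial + 1 center point."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatable choice of axial distance
    pts = [list(p) for p in product((-1.0, 1.0), repeat=k)]  # factorial corners
    for i in range(k):
        for a in (-alpha, alpha):                            # axial (star) points
            axial = [0.0] * k
            axial[i] = a
            pts.append(axial)
    pts.append([0.0] * k)                                    # center point
    return pts

pts = central_composite_design(2)  # e.g. patch length and width as coded factors
print(len(pts))                    # 4 factorial + 4 axial + 1 center = 9
```

Each design point corresponds to one FEA run, and a quadratic response surface fitted through the nine results supports the optimization over patch dimensions.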
    Modelling, Vol. 3, Pages 105-126: Data Integration and Interoperability: Towards a Model-Driven and Pattern-Oriented Approach (2022-02-27) https://www.mdpi.com/2673-3951/3/1/8

    Modelling doi: 10.3390/modelling3010008

    Authors: Roland J. Petrasch, Richard R. Petrasch

    Data integration is one of the core responsibilities of EDM (enterprise data management) and interoperability. It is essential for almost every digitalization project, e.g., during the migration from legacy ERP (enterprise resource planning) software to a new system. One challenge is the incompatibility of data models, i.e., different software systems use specific or proprietary terminology, data structures, data formats, and semantics. Data need to be interchanged between software systems, and complex data conversions or transformations are often necessary. This paper presents an approach that allows software engineers or data experts to use models and patterns to specify data integration: it is based on data models, such as ER (entity-relationship) diagrams or UML (unified modeling language) class models, that are well accepted and widely used in practice. Predefined data integration patterns are combined and applied at the model level, leading to formal, precise, and concise definitions of data transformations and conversions. Data integration definitions can then be executed (via code generation) so that a manual implementation is not necessary. The advantages are that existing data models can be reused, standardized data integration patterns lead to fast results, and data integration specifications are executable and can be easily maintained and extended. An example transformation of elements of a relational data model to object-oriented data structures, focusing on data mappings and relationships, shows the approach in practice.

    Data Integration and Interoperability: Towards a Model-Driven and Pattern-Oriented Approach. Roland J. Petrasch, Richard R. Petrasch. Modelling, Vol. 3, Issue 1, first page 105. Article. doi: 10.3390/modelling3010008. Published 2022-02-27. https://www.mdpi.com/2673-3951/3/1/8
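The core idea, a mapping specified at the model level and then executed generically, can be sketched in a few lines. The Customer class and column names below are invented for illustration; the paper works from richer pattern definitions and generates code rather than interpreting a dictionary:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """Object-oriented target of the transformation."""
    customer_id: int
    full_name: str

# Mapping specification: target field -> source column (declared, not hand-coded).
CUSTOMER_MAPPING = {"customer_id": "CUST_ID", "full_name": "CUST_NAME"}

def map_row(row, mapping, target_cls):
    """Generic executor: apply a declared mapping to one relational row."""
    return target_cls(**{field: row[col] for field, col in mapping.items()})

row = {"CUST_ID": 7, "CUST_NAME": "Ada"}
print(map_row(row, CUSTOMER_MAPPING, Customer))
```

The point of the model-driven approach is that only the mapping declaration changes between integrations, while the executor (or the generated code) stays the same.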
    Modelling, Vol. 3, Pages 94-104: Estimating the Benefits of Korea’s Intercity Rail Speed Increase Project: An Agent-Based Model Approach https://www.mdpi.com/2673-3951/3/1/7 In the cost–benefit analysis of urban transportation investment, logsum-based benefit calculation is widely used; however, it is rarely applied to inter-regional transportation. In this study, we applied a logsum-based approach to the calculation of benefits for high-speed projects for inter-regional railways in Korea’s long-term transportation plan. Moreover, we applied a behavioral model in which an agent travels beyond the zones assumed by an aggregate model. In the case of South Korea, such a model is important for determining transportation priorities: whether to specialize in mobility improvement by investing in a high-speed railway project, such as the 300 km/h Korea Train eXpress (KTX), or to improve existing facilities, such as by building a relatively slower railroad (150–250 km/h) to enhance existing mobility and accessibility. In this context, if a new, relatively slow railroad were constructed adjacent to a high-speed railroad, the benefits would appear negligible, since the reduction in travel time would not sufficiently reflect accessibility improvements. Therefore, this study proposes the use of aggregate and agent-based models to evaluate projects to improve intercity railway service and conducts a case study with the proposed methodology. A logsum was selected to account for the benefits of passenger cars on semi-high-speed and high-speed railroads simultaneously, since it has been widely used to estimate the benefits of new or relatively slow modes. To calculate the logsum, this study used input data from both the aggregate and the individual agent-based models and found that an analysis of the feasibility of inter-regional railroad investment was possible. Moreover, the agent-based model can also be applied to inter-regional analysis. The proposed methods are expected to enable a more comprehensive evaluation of the transport system. In the case of the agent-based model, it is suggested that further studies undertake more detailed scenario analysis and travel time estimation. (2022-01-30)

    Modelling doi: 10.3390/modelling3010007

    Authors: Chansung Kim Heesub Rim DongIk Oh Dongwoon Kang

    Estimating the Benefits of Korea’s Intercity Rail Speed Increase Project: An Agent-Based Model Approach Chansung Kim Heesub Rim DongIk Oh Dongwoon Kang doi: 10.3390/modelling3010007 Modelling 2022-01-30 Modelling 2022-01-30 3 1
    Article
    94 10.3390/modelling3010007 https://www.mdpi.com/2673-3951/3/1/7
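    The logsum-based benefit measure the abstract above refers to is the change in expected maximum utility of a logit mode-choice model, converted to money through the cost coefficient. A minimal sketch in Python, assuming an illustrative two-mode (car vs. rail) multinomial logit — the coefficients, travel times, and costs are made up for illustration and do not come from the study:

```python
import math

def logsum(utilities, mu=1.0):
    # Expected maximum utility of a multinomial logit choice set:
    # (1/mu) * ln( sum_i exp(mu * V_i) )
    return math.log(sum(math.exp(mu * v) for v in utilities)) / mu

# Illustrative systematic utilities V = -b_time*time - b_cost*cost
b_time, b_cost = 0.05, 0.1          # assumed coefficients, not from the study

def V(time_min, cost):
    return -b_time * time_min - b_cost * cost

# Before: car vs. conventional rail; after: the rail trip is sped up
before = [V(180, 30), V(160, 25)]   # car, conventional rail
after  = [V(180, 30), V(110, 30)]   # car, upgraded (faster, pricier) rail

# Logsum difference, monetized via the marginal utility of cost
benefit_per_trip = (logsum(after) - logsum(before)) / b_cost
```

    Unlike a plain travel-time saving, the logsum difference also captures the accessibility gain accruing to travelers who keep their old mode, which is why it suits evaluating semi-high-speed and high-speed rail alternatives within one framework.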
    Modelling, Vol. 3, Pages 92-93: Acknowledgment to Reviewers of Modelling in 2021 https://www.mdpi.com/2673-3951/3/1/6 Rigorous peer-reviews are the basis of high-quality academic publishing [...] 2022-01-30 Modelling, Vol. 3, Pages 92-93: Acknowledgment to Reviewers of Modelling in 2021

    Modelling doi: 10.3390/modelling3010006

    Authors: Modelling Editorial Office

    Acknowledgment to Reviewers of Modelling in 2021 Modelling Editorial Office doi: 10.3390/modelling3010006 Modelling 2022-01-30 Modelling 2022-01-30 3 1
    Editorial
    92 10.3390/modelling3010006 https://www.mdpi.com/2673-3951/3/1/6