Digitalization and Dynamic Criticality Analysis for Railway Asset Management


1. Introduction and Background

Rail infrastructure maintenance management faces the challenge of ensuring safe and efficient operations in the face of increasing demand for faster and more reliable transport services. As an essential part of the global transport network, rail infrastructure facilitates the movement of people and goods. At the same time, digitalization is revolutionizing asset management by improving the accuracy and efficiency of information handling [1]. In the current context, rail asset management faces several challenges:
  • One of the main problems is the reliance on traditional, static methods based on past experience and expert judgement, which have shown limitations, especially in adapting to change and managing unforeseen emergencies.

  • Integrating legacy systems with new digital tools is another critical challenge. Many rail networks still rely on outdated systems that are not compatible with modern digital technologies. Upgrading these systems requires significant investment and technical expertise, which can be a considerable hurdle for many organizations.

  • Managing the sheer volume of data generated by IoT devices is a further challenge. These data require robust management capabilities to be useful for strategic and operational decision-making. In addition, the interoperability of maintenance management systems is crucial for effectively implementing digital technologies across the rail network, and achieving this interoperability can be complex.

  • Cybersecurity has become a critical concern due to increased connectivity and dependence on digital systems. Protecting these systems against cyber threats is essential to ensure security and operational continuity.

  • Finally, effective implementation of digital technologies remains a challenge, especially in networks with older or less digitized infrastructure. This implementation requires technological, organizational, and training changes to ensure staff adapt to new technologies.

Digitalization and criticality analysis together offer innovative solutions to address challenges in railway asset management. Sensors embedded in the infrastructure enable the automated collection and processing of real-time data on asset conditions, such as track wear and train condition. In addition, digital twins, continuously updated with these data, allow operational scenarios to be simulated and potential failures to be foreseen before they occur.

  • On the one hand, digitalization

    Enables the integration of advanced technologies such as IoT, Big Data, and digital twins, facilitating the collection and processing of real-time data on the condition of railway assets. This not only improves the accuracy of asset condition assessment but also enables a faster and more efficient response to changing conditions and unforeseen emergencies [2].
    Facilitates interoperability and system integration through the use of advanced data models, such as the asset administration shell (AAS), which enables the representation of the digital characteristics of a physical asset. This enables advanced and efficient management throughout the asset lifecycle, overcoming the limitations of legacy systems [3].
  • On the other hand, criticality analysis

    Is a methodology that identifies and prioritizes system components whose failure would negatively impact operations. When combined with digitalization, this analysis becomes more dynamic and accurate, allowing the continuous and on-demand assessment of the railway assets. This technique optimizes resource allocation by focusing on the most critical assets, thus improving safety and operational efficiency.

In other words, digitalization and criticality analysis address technical and operational challenges and promote a proactive and predictive approach to railway asset management. This approach contributes to increased rail infrastructure reliability and safety, marking a significant advance in asset management in the digital age.

With the above in mind, this paper develops an approach to railway asset management through digitalization and criticality analysis. To this end, a data model is proposed that, through a structured extract, transform, and load (ETL) process and a criticality evaluation algorithm built on this data model, enables an automated and accurate assessment of the risk associated with railway assets.

This paper emphasizes the importance of a holistic approach that combines digital tools with organizational change management as a part of this transversal approach. In addition, it highlights the need for standardization [4] and interoperability to integrate legacy systems with new technologies, thus overcoming the current limitations in asset management.
The process aims to optimize resource allocation and improve operational safety through the digitalization of railway maintenance, including data collection and processing, sub-model encapsulation, criticality assessment, and generation of risk alerts. The purpose is to demonstrate that digitalization is a key enabler for managing criticality in complex railway systems, improving reliability, safety, and operational efficiency while addressing technical and organizational challenges. Criticality analysis is essential in managing complex systems, such as rail and utility networks, to identify and prioritize elements whose failure could negatively impact operations [5]. In rail networks, this analysis focuses on tracks, signaling systems, and control units: components critical to safety and operational efficiency.
In this context, Rodriguez et al. [6] develop the application of operational characteristics of the asset (OCA) as a vector of attributes representing a list of properties that describe the asset. In the railway case, the values of this vector are the different attributes that affect the performance or criticality of the asset. Each attribute (a1, a2, …, an) describes a specific operational characteristic of the railway asset. Examples of operational attributes are as follows:
  • Location of the asset (on a route or at a station): This attribute indicates whether the asset is in a train station or along a route between stations. Location can influence the frequency of failures due to wear and the consequences of failures, as a failure at a station can have a different impact than a failure on the open track.

  • Network type (high-speed, conventional, suburban): Depending on the network, assets may be subject to different voltage levels, speeds, and usage, affecting the likelihood of failure and its consequences. For example, a high-speed network can have more severe consequences if it fails.

  • Traffic speed: Assets located on sections where trains run at higher speeds may be subjected to more significant mechanical stress, increasing failure frequency. In addition, if a failure occurs, the consequences could be more severe due to the high speed.

  • Asset location: Assets in curves or tunnels may have more adverse conditions affecting their durability. In a curve, trains can generate greater friction, and in a tunnel, ventilation and humidity conditions can be different, affecting both the frequency of failures and the consequences of those failures (e.g., longer or more difficult-to-repair service interruptions).

  • Impact on failure frequency: Each attribute can increase or decrease the likelihood of the asset failing. For example, a curve or tunnel asset might fail more often than one in a straight, open section due to more demanding operating conditions.

  • Impact on the consequences of failures: In addition to frequency, these attributes affect the consequences when the asset fails. A failure in a tunnel can be more challenging to repair and have more severe consequences than one in an open area. Similarly, a failure in a high-speed network could cause more severe delays than in a suburban network.

This analysis allows asset managers to prioritize resources towards the most critical elements, improving system safety and reliability [7]. The technique facilitates proactive maintenance planning, reducing unexpected failures and ensuring the continued operability of the network. A failure in any of these critical elements can disrupt service and compromise safety. Criticality analysis identifies vulnerable elements and prioritizes their maintenance, minimizing the risk of unforeseen failures [8].
In other networks, complex network analysis (CNA) and graph Fourier transform (GFT) are applied to optimize asset management and enhance criticality assessment [9,10]. Additionally, machine learning techniques are employed to predict failures and detect anomalies [11]. Each network’s unique characteristics influence the selection of criticality analysis models and the corresponding data. Table 1 provides a summary of these network characteristics.
While both rail and utility networks use criticality analysis methodologies and advanced digital tools, there are important differences in the criteria and approaches used to assess criticality. In Table 2, some of the “Key elements” considered in this context are reviewed and compared, highlighting the differences between one type of network and another. Criticality analysis in rail and utility networks shows that although the methodologies and technologies used for analysis may be similar, the objectives and criteria for each system vary significantly depending on the nature of the network. In rail networks, the focus is on operational safety and service continuity, where the failure of critical components can directly impact passenger safety and system operation, unlike utility networks, which prioritize resilience and quality of service, focusing on ensuring continuity of supply. Digitalization, including technologies such as digital twins and predictive analytics, plays a key role in both contexts, enabling greater efficiency in maintenance planning and asset management. However, the challenge remains in the effective implementation of these technologies, especially in utility networks with older or less digitized infrastructure.
The objective of this research is to develop a digitalized data model that enables more accurate and real-time criticality analysis in railway maintenance management. By integrating an ETL process and advanced technologies such as digital twins and asset attribute characterization (AAC), this study aims to optimize the identification and prioritization of critical assets, thereby enhancing operational efficiency, resource allocation, and safety. Section 2 presents a comprehensive review of the relevant literature to establish the theoretical framework for the proposed model, drawing on recent advancements in digitalization and predictive maintenance. The methodology is developed in Section 3, including the ETL process and the model for criticality assessment. A case study focusing on a commuter rail network is shown in Section 4. In Section 5, the results are analyzed, and the practical implications are discussed, with recommendations for future research. Finally, the conclusions are presented in Section 6.

2. Literature Review

The theories and concepts related to digitalization and criticality analysis in railway maintenance focus on transforming traditional practices towards more advanced and efficient approaches through digital technologies. Relevant studies such as [12,13] follow this line. These concepts and theories are fundamental to understanding how digitalization can improve railway asset management, enabling more efficient and safer maintenance practices. Previous research has explored how digital technologies can transform and optimize railway maintenance.
  • Crespo Márquez [3] proposes a model incorporating digitalization as a crucial enabling factor, facilitating extensive assessments in organizations managing millions of assets.
  • The Asset Management Working Group [4], corresponding to the International Union of Railways (UIC) guidance, suggests a model capable of managing risk throughout the asset lifecycle, integrating operations and maintenance management. This approach implies a high degree of complexity in handling multiple data sources, which makes the implementation of advanced technologies such as digital twins essential.
  • Digital twin studies in [14,15] explore using digital twins to improve railway asset management. These twins allow real-time simulation and analysis of asset behavior, improving criticality analysis and enabling a more proactive approach to maintenance.
  • Research on IoT and Big Data in rail asset management has been extensively studied in references such as [16,17], highlighting how these technologies enable real-time data collection and analysis to optimize decision-making and improve operational efficiency.
  • The use of artificial intelligence (AI) in railway maintenance has shown substantial advantages, especially in identifying and predicting defects using techniques such as computer vision and machine learning [18]. For example, automated visual track inspection allows rail geometry defects and component wear to be detected with significantly higher accuracy than traditional manual inspections. In addition, deep learning algorithms can detect objects on the track and segment signals in complex environments, improving safety and reducing the risk of accidents. However, the deployment of AI in this sector faces critical challenges. One of the biggest is the lack of specific standards and regulations that ensure the safety and reliability of AI-based systems. In addition, AI integration requires a large amount of high-quality data, the availability and accuracy of which can vary considerably between railway systems. Despite these challenges [19], AI has the potential to transform railway maintenance management, enabling real-time scenario simulation and predictive alert generation that improve operational efficiency and safety.
  • Several studies, shown in Table 3, have laid the groundwork for implementing digital technologies in rail asset management, demonstrating their potential to improve the safety, efficiency, and sustainability of rail operations. The current literature on the digitalization and management of railway assets has identified several gaps that limit the full exploitation of digital technologies in this sector.

    Heterogeneous Data Integration: One of the main challenges is integrating extensive volumes of data from various sources, such as sensors and monitoring devices. Consistency in analyzing these data remains a complex task due to railway systems’ dynamic and varied nature.

    Scalability and Flexibility: Another critical gap is ensuring that digital solutions are scalable and flexible to adapt to different railway environments. While promising models have been proposed, such as digital twins, more studies are needed to explore their scalability in various operational contexts.

    Interoperability and Standardization: A significant obstacle is a lack of standardization and interoperability between different technologies and systems. Both aspects hinder the effective integration of new technologies with legacy systems, which is crucial for efficient asset lifecycle management.

    Application of Advanced Data Models: Although progress has been made in collecting and analyzing large volumes of data, few studies have succeeded in integrating these data into operational models that facilitate dynamic criticality assessment and real-time predictions.

    Cybersecurity and Data Management: Secure data management and protection against cyber threats are growing concerns as more aspects of railway maintenance are digitized. Addressing these challenges is essential to maximize the potential of digitalization.

These gaps highlight the need for further research and development of solutions that enable more effective and secure integration of digital technologies in rail asset management. Table 3 summarizes these challenges and gaps in the literature.

Table 3. Literature Gaps and Challenges.

Gap Identified | Detail | References
Holistic Integration | Need to combine various digital tools into a unified framework. | [20,21]
Use of Real-Time Data | Development of methodologies for using real-time data in dynamic criticality analysis. | [17,22]
Standardization and Interoperability | Development of common standards and protocols for the seamless integration of digital technologies. | [20]
Human and Organizational Factors | Addressing the cultural and structural changes necessary for digital transformation. | [23,24]
Cybersecurity and Data Privacy | Ensuring the security and integrity of data in digital systems. | [25]
Cost–Benefit Analysis | Conducting comprehensive cost–benefit analyses of digitalization. | [3]
Scalability and Flexibility | Development of scalable and flexible digital solutions to adapt to different contexts. | [14]
This study is positioned as a significant contribution to the railway digitalization and asset management field, addressing several gaps identified in the previous research. One of the main contributions is the creation of an innovative data model that efficiently integrates digitalization in the criticality analysis of railway systems. This integration differs from previous research that has dealt with digitalization in a more fragmented way, without a clear ontological framework for criticality assessment. Furthermore, this contribution addresses the need for advanced methodologies to use real-time data, a gap highlighted by other researchers, such as [17,22], who did not integrate these data into a dynamic criticality framework. This study seeks to close existing gaps by providing a practical framework for criticality assessment through digitalization.

3. Methodology

This research proposes an extract, transform, and load (ETL) process to develop a data model that facilitates the digitalization of criticality analysis in railway maintenance management through the application of automated rules based on the inherent attributes of each asset. This approach creates an ontological framework that establishes the foundation for replicating this type of analysis across other railway networks. Unlike traditional methods, which rely on static information and expert judgment, our methodology addresses the need for a system capable of managing large-scale asset contexts and diverse operational environments by automating criticality assessment and enabling real-time data integration and analysis. Figure 1 provides a graphical representation of this framework and illustrates how its different elements interact, offering a comprehensive view of the process, which is further detailed throughout this paper.

3.1. Methodological Development

  • Description of the ETL process and AAS framework: The data model focuses on extracting information from various sources, transforming it to ensure consistency and compatibility, and loading it into a master database that centralizes the data for analysis. This process begins with identifying and collecting operational and performance data from heterogeneous sources. The data are then normalized using relationship tables to create a subset that assigns values to variables for each asset based on asset type, operational context, and the required level of intervention. This approach is reinforced by the principles of the asset administration shell (AAS), which propose a standardized digital representation of the asset. This digital representation acts as a multi-layered structure containing information such as status, service condition, maintenance history, and any other relevant data that aid in developing a dynamic criticality analysis (a minimal illustrative sketch follows this list). As an example, Figure 2 represents the structure developed in our use case, showing graphically how the multiple attributes of each asset are fed from different sources (“Operational Entities”) through specific relationships. In the same way, from other sources of operational information (MTC), it is possible to establish, by inference from the asset type and reference system, data on the frequency of failures and on operational impact, such as delayed trains or system unavailability.
  • Asset attribute characterization (AAC) and its implementation: Asset attribute characterization (AAC) is applied to clearly define the critical attributes that influence the criticality of assets. This process includes defining and analyzing variables containing attributes such as failure frequency, unavailability, and other performance indicators that directly affect the operability of railway systems. These attributes characterize the assets; they are elaborated from operational data (online or on-demand) or historical data and integrated into the criticality analysis.

  • Procedures for criticality assessment and creation of digital assets: The procedures established for criticality assessment use algorithms that analyze asset attributes to calculate their potential impact based on predefined criteria. This assessment is visualized in a criticality matrix that segments assets according to risk levels and allows a clear visualization of intervention priorities. Model-specific business rules are executed on the master database values to automatically determine each asset’s criticality value. In a preceding step, the developer must create the digital asset by replicating the physical and operational attributes of the asset in a standardized digital format.

  • Integration with enterprise asset management (EAM) systems: Integrating criticality scores and risk alerts into the EAM system initiates specific maintenance activities if significant changes in risk occur. For example, measuring the wear of a rail section transmitted by the monitoring train updates the EAM system, allowing the maintenance manager to determine that the section needs to be inspected and repaired promptly, thus avoiding significant problems in train operation.

  • Data Visualization and Modeling: The visualization is used to facilitate understanding of the process and decision-making, following diagrams such as Figure 2, which illustrates the data flow from extraction to criticality assessment. In addition, machine learning algorithms are envisaged to model and predict asset behavior, improving failure anticipation and maintenance strategy.
  • Continuous improvement: Finally, it is suggested that the master data set be enhanced with additional attributes, such as real-time health status measurement, and that advanced analysis techniques be employed to further optimize railway asset management. This methodology not only proposes an effective operational model but also sets a precedent for future improvements in digitalization and criticality analysis.
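As a complement to the sub-model encapsulation described above, the following minimal Python sketch shows one possible way to represent an asset administration shell whose status and context sub-models can be queried on demand. It is an illustrative sketch only: the class names, sub-model names, and attribute keys are assumptions for exposition, not the schema used in the case study.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class SubModel:
    """A named sub-model of the shell, e.g., status or maintenance history."""
    name: str
    attributes: Dict[str, Any] = field(default_factory=dict)

@dataclass
class AssetAdministrationShell:
    """Illustrative digital representation of one physical railway asset."""
    asset_id: str
    asset_type: str
    sub_models: List[SubModel] = field(default_factory=list)

    def get(self, sub_model_name: str, attribute: str) -> Any:
        """Query one attribute on demand, as the criticality analysis would."""
        for sm in self.sub_models:
            if sm.name == sub_model_name:
                return sm.attributes.get(attribute)
        return None

# Example: a rail segment whose sub-models are fed from operational sources
rail = AssetAdministrationShell(
    asset_id="A-0001",
    asset_type="rail",
    sub_models=[
        SubModel("status", {"failure_frequency": 2.0, "unavailability_h": 1.5}),
        SubModel("context", {"network": "suburban", "in_tunnel": False}),
    ],
)
print(rail.get("status", "failure_frequency"))  # -> 2.0
```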

3.2. ETL Process for Rail Asset Assessment

The extract, transform, and load (ETL) process is crucial for efficiently handling and processing large volumes of data in railway infrastructure systems, ensuring that raw data from various sources can be transformed into actionable insights. In this section, we describe the key variables and steps involved in the ETL process, which supports the criticality assessment model. Section 3.2.1 defines the main variables—such as raw data, transformations, and the final load—that are used to standardize and structure the data for analysis. Section 3.2.2 details how the ETL process is implemented in practice, including extraction from multiple sources, transformation through normalization, filtering, and attribute creation, and the final load into a master table. Finally, Section 3.2.3 formalizes the complete ETL process, presenting the mathematical model used to integrate these steps and how it can be adapted to other railway systems.

3.2.1. Definition of Key Variables in the ETL Process

This section outlines the essential variables involved in the ETL process, including raw data, the transformations applied to it, and the final load into a centralized database. These variables form the basis of the data model that supports criticality assessment for railway assets.

  • Raw Data (D):

    (D) represents raw data extracted from various sources, such as Excel files, SQL databases, inventory systems, and real-time IoT sensors.

    These data are organized in several tables (D1, D2, …, Dn), where each table contains asset-specific information; for example, (D1) could contain inventory information and (D2) failure history.

  • Transformations (T):

    Transformations are the operations we apply to raw data to clean, normalize, and structure them into a valid format. These transformations can include the following:

    Normalization of heterogeneous data.

    Calculation of additional attributes (such as failure frequency or network categorization), further detailed in Section 4, where we discuss their role in determining both the failure frequency and failure consequences.

    Filtering of non-relevant records.

    Enrichment of the data by cross-referencing different sources.

    Each transformation can be modeled as a function applied to the raw data:

D′i = Tj(Di)

where (Tj(Di)) denotes the j-th transformation applied to the data table (Di).

The final output of the ETL process is a centralized and standardized database called MASTER_TBL. This data master will contain all the key attributes necessary for the criticality assessment. The load of the transformed data can be expressed as

MASTER_TBL = L(T(Di))

where the transformed table (T(Di)) is loaded into the final structure.

3.2.2. Extraction, Transformation, and Load Process Phases (ETL)

This section details the individual phases of the ETL process, including data extraction from multiple sources, transformations to normalize and enrich the data, and the final loading of this transformed data into a master table for use in criticality analysis.

Rail asset data are extracted from a variety of sources (F1, F2, …, Fn), which may include the following:

  • SQL inventory databases (F1),

  • Excel files with maintenance information (F2),

  • IoT sensors (F3),

  • GIS systems to locate assets (F4).

Extraction is formalized as an operation: E(Di) = Di for i = 1, 2, …, n, where (E(Di)) represents the raw data extracted from source (i), which will then be transformed.

The transformations are divided into several sub-processes:

  • Attribute Normalization: Normalize the raw data (Di) so that all assets have a common attribute format. For example, if an attribute such as failure frequency is recorded in different units or formats, it is transformed by a normalization function:

Tnorm(Di) = normalize(Di)

  • Calculation of New Attributes: In many cases, the system must create new attributes that are not present in the raw data. An example is calculating the unavailability time (IT) and the number of circulations per hour (TT). These new attributes can be modeled mathematically as

Tnew(Di) = Di ∪ {IT, TT, …}

Here, the function (Tnew) applies calculations on (Di) to derive the new attributes.

  • Data Filtering: Irrelevant or redundant data are removed during filtering. For example, non-maintainable assets are filtered out if only information on maintainable assets is required. This task can be modeled as

Tfilter(Di) = {x ∈ Di : x is a maintainable asset}

  • Data Crossing: The ETL process also involves crossing different tables to generate a complete database. This cross can be formalized as

Tcross(Di, Dj) = Di ⋈ Dj

where (⋈) represents a join between the tables (Di) and (Dj) based on common keys (such as the asset identifier).

  • Load (L):

    Finally, the transformed and normalized data are loaded into a master table MASTER_TBL, which is the final result of the ETL process. This is expressed as

MASTER_TBL = L(D′i)

    where (D′i) denotes the transformed data. The master table will contain all the calculated and transformed attributes, ready to be used in the criticality assessment. Each row in MASTER_TBL represents an asset, and each column represents one of the key attributes (failure frequency, impact, downtime, etc.).

3.2.3. Formalization of the Complete ETL Process

We can now merge the extraction, transformation and loading phases into a generalized model. The complete process for an asset (Ai) can be described as

MASTER_TBL = L(Tcross(Tfilter(Tnew(Tnorm(E(Di))))))

where

(E(Di)) extracts the raw data from the source (i);

(Tnorm), (Tnew), (Tfilter), and (Tcross) represent the normalization transformations, creation of new attributes, filtering of irrelevant data, and data crossover, respectively;

(L) is the final load operation in MASTER_TBL.

Applicability to Other Companies: This ETL process is easily adaptable to other railway undertakings. This process can be reused on any railway infrastructure by changing the specific data sources and adjusting the necessary transformations according to the type of assets and network characteristics. In addition, the ETL process can evolve as new data sources (such as advanced IoT sensors or predictive maintenance systems) are added, making it flexible and scalable.
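To make the formalized pipeline more tangible, the following sketch chains the four transformations and the load step using pandas. This is a minimal sketch under stated assumptions: the inline data, column names, and derived attributes are hypothetical placeholders, and a real deployment would extract from the network’s own SQL, Excel, IoT, and GIS sources.

```python
import pandas as pd

# Stand-in for E(Di): in practice, read from SQL, Excel, IoT, or GIS sources.
assets = pd.DataFrame({
    "asset_id": [1, 2, 3],
    "failures_last_6y": [12, 3, 0],   # assumed raw attribute
    "daily_trains": [240, 240, 60],   # assumed raw attribute
    "maintainable": [True, True, False],
})
gis = pd.DataFrame({"asset_id": [1, 2, 3], "in_tunnel": [False, True, False]})

def t_norm(df: pd.DataFrame) -> pd.DataFrame:
    """Tnorm: normalize raw failure counts into failures per year."""
    return df.assign(failure_frequency=df["failures_last_6y"] / 6.0)

def t_new(df: pd.DataFrame) -> pd.DataFrame:
    """Tnew: derive attributes absent from the raw data (circulations per hour)."""
    return df.assign(traffic_per_hour=df["daily_trains"] / 24.0)

def t_filter(df: pd.DataFrame) -> pd.DataFrame:
    """Tfilter: keep maintainable assets only."""
    return df[df["maintainable"]]

def t_cross(df: pd.DataFrame, other: pd.DataFrame) -> pd.DataFrame:
    """Tcross: join tables on the common asset identifier."""
    return df.merge(other, on="asset_id", how="left")

# MASTER_TBL = L(Tcross(Tfilter(Tnew(Tnorm(E(Di))))))
master_tbl = t_cross(t_filter(t_new(t_norm(assets))), gis)
master_tbl.to_csv("MASTER_TBL.csv", index=False)  # L: a file stands in for the database
print(master_tbl)
```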

3.2.4. ETL Example Table (Explanation of Table Elements)

This table gives a clear visual reference of how the data are processed and structured, illustrating each step of the ETL process.

Original Data Type: The type of data as they are initially obtained from their source, for example, asset inventory, maintenance records, real-time sensor data, etc.

Data Source: The specific source of each data type, such as SQL databases, Excel files, IoT devices, or GIS systems.

Transformation Applied: The transformation applied to the original data to structure it for the final data model (normalization, filtering, metric conversion, etc.).

Transformed Data: The final result after transformation, ready to be loaded into the central database or used in criticality analysis. See examples in Table 4.

3.3. Generalized Mathematical Model for Criticality Assessment

The model is based on the main parameters affecting railway assets, such as the frequency of failures, the impact of these failures on key dimensions, and the operational characteristics of the asset, as described in detail in Section 4, where specific examples of how these attributes influence criticality are presented.

3.3.1. Definition of Variables

  • Failure Frequency (FF): (FFi) is the frequency with which an asset (i) fails in a given period of time (e.g., failures per year). This frequency can be derived from historical maintenance data or predictions based on the type of asset and its operating environment.

  • Consequences of Failure (C): (Ci) is the impact that the failure of asset (i) would have in different dimensions. It can be modeled as a vector of consequences:

Ci = [Csafety, Cenvironment, Cservice, Ccosts]

where each component reflects the consequence in a specific dimension (safety, environment, quality of service, operational cost overruns).

  • Asset Criticality (CR): (CRi) is the total criticality of an asset (i), which combines the frequency of failure with the consequences of those failures. The result is the key value that will be used to prioritize the most critical assets in maintenance.

  • Operational Characteristics of the Asset (A): (Ai = [a1, a2, …, an]) is a set of attributes describing the asset (i), such as its location (route or station), type of network, traffic speed, and whether it is in a curve or tunnel. These attributes affect both the frequency of failure and the consequences of failure.

3.3.2. General Criticality Formula

The criticality of an asset (i) can be modeled mathematically as follows:

CRi = FFi(Ai) · Σ(j=1 to k) wj · Ci,j(Ai)

where

FFi(Ai) is the failure frequency of asset (i) as a function of its operational characteristics (Ai),

Ci,j(Ai) is the value of the consequence of the failure in dimension (j) as a function of the operational characteristics (Ai) (e.g., safety, environment, etc.),

(wj) is a weight assigned to each dimension (j) to reflect its relative importance in the criticality analysis,

(k) is the number of dimensions considered (in this case, 4: safety, environment, service, costs).

The weights (wj) can be adjusted according to the strategic importance of each dimension for the railway undertaking; the methodology for determining these weights is explained in Section 3.3.4. For example, for a company with a strong focus on safety, (wsafety) will be greater than the weights of the other dimensions.
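A direct transcription of the formula into code might look as follows. The weights shown are illustrative, and the function assumes that (FFi) and (Ci,j) have already been evaluated for the asset’s operational characteristics (Ai).

```python
from typing import Dict

# Illustrative dimension weights (wj); Section 3.3.4 discusses how they are set.
WEIGHTS: Dict[str, float] = {"safety": 0.4, "environment": 0.2, "service": 0.3, "costs": 0.1}

def criticality(ff: float, consequences: Dict[str, float],
                weights: Dict[str, float] = WEIGHTS) -> float:
    """CRi = FFi(Ai) * sum over j of wj * Ci,j(Ai)."""
    return ff * sum(weights[dim] * consequences[dim] for dim in weights)

# An asset failing 0.5 times per year with consequence vector [9, 2, 8, 6]
print(criticality(0.5, {"safety": 9, "environment": 2, "service": 8, "costs": 6}))  # 3.5
```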

3.3.3. Frequency and Consequence Models

Failure Frequency (FF): The frequency of failures (FFi) can be modeled using historical data or predictions based on asset attributes, such as operating time, number of runs, or track type. A standard model expressing the failure frequency as a function of asset characteristics is as follows:

FFi = α + β1a1 + β2a2 + ⋯ + βnan

where (α) is a base constant, and the coefficients (βn) adjust the impact of each characteristic (an) on the failure frequency. (The choice of a linear regression model for predicting failure frequency (FF) was made due to its simplicity and interpretability, especially when the relationship between variables is approximately linear and the data are sufficient. Alternative models, including machine learning approaches such as decision trees and non-linear regression, are being explored by our research team for future studies. However, these models typically require larger datasets and greater computational resources, which are beyond this study’s scope.)
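As a sketch of how the coefficients of this linear model could be estimated from historical data, the following example applies ordinary least squares with NumPy; the two features (operating hours and circulations per day) and all numerical values are illustrative assumptions.

```python
import numpy as np

# Historical observations: each row holds the attribute values (a1, a2) of one asset.
X = np.array([[12000.0, 40.0],
              [30000.0, 90.0],
              [8000.0, 25.0],
              [22000.0, 60.0]])
y = np.array([0.8, 2.4, 0.5, 1.6])  # observed failures per year for each asset

# Fit FFi = alpha + beta1*a1 + beta2*a2 by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
alpha, betas = coef[0], coef[1:]

# Predict the failure frequency of a new asset from its attributes.
ff_new = alpha + betas @ np.array([15000.0, 50.0])
print(f"predicted FF: {ff_new:.2f} failures/year")
```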

Consequences (C): Each consequence can be modeled (Ci,j) using historical failure impact data or by simulations of potential failures. A standardized scale of consequences (e.g., from 1 to 10) can be used for each dimension. The weighted sum of the consequences in the different dimensions gives the total impact of an asset failure; these consequences can also be simplified into qualitative equivalents such as high, medium, low, and unacceptable.

3.3.4. Generalization Considerations

Adaptation of Weights: The model is easily adaptable to different railway companies by adjusting the weights (wj) according to the company’s priorities. A company that values service quality more highly may assign a higher weighting to (wservice), while one that is more concerned about the environment might give a higher weight to (wenvironment). The assignment of weights and consequences is typically carried out through an expert consensus methodology, in which relevant company areas determine which dimension has the greatest impact on their business. It is worth noting that this distribution may vary in the medium term; however, in the short term, it remains stable, as is common in any risk analysis.

Integration of Digital Technologies: The implementation of this model can be realized in a maintenance management system (EAM), where digital twins of each asset would allow for real-time monitoring of the values of (FFi) and (Ci), recalculating criticality continuously to adapt to changing operational conditions. In the absence of integrated systems, the data on asset operating characteristics must be extracted from a data model that consistently relates the operational context to the specific asset. These relationships may not be direct, so it will be necessary to use relationship tables to interpret the source data about the asset. Each of these sub-models can then be attached to each asset to make the information available on demand for future queries.

In this study, the developed criticality model does not consider failure interdependence, that is, the cascading effect that a failure in one component can have on others. Although a factorial analysis of failure interdependence would be valuable, it is considered beyond the scope of this work due to the added complexity and specific data requirements on asset dependency. However, the analysis based on the loss of functionality of the higher-level maintainable asset, the UFS (functional unit of the system, as per Rodriguez et al. [26]), partially compensates for this interdependence, as it is reflected in practice through the functional loss of the UFS, even if not explicitly modeled. In the applied case study, one of the considerations in the ETL process includes an inherited failure model, in which failures of secondary components are transferred to primary components when these failures cause a system-wide malfunction. Nevertheless, we recognize that the impact of partial failures, which do not entirely stop the system, is not captured in this model and could be explored further. We suggest that future research consider integrating interdependence models, such as those based on reliability block diagrams (RBD), Markov chains, and other systemic analysis models, to provide a more comprehensive risk assessment in complex railway systems.

Scenario Simulations: The model can be used to simulate different maintenance scenarios, calculating how criticality would change if preventive interventions were or were not carried out on certain assets. This is especially useful for planning maintenance investments.

3.3.5. Simplified Practical Example

Let us assume an asset (i) with the following values:

Data, attributes, and conditions:

Asset i, Condition FF1: If the asset is located at location Az (for example, Line C4, Kilometer 16) and is also of type rail, then FFi = 2.

Asset i, Condition CC1: If the asset’s Attribute A1 [class] = rail and Attribute A2 [zone] = main track, then Ci,safety = 8.

Asset i, Condition CC2: If the asset’s Attribute A3 [area] = protected, then Ci,environment = 3.

Asset i, Condition CC3: If the asset’s Attribute A4 [time] = peak, then Ci,service = 7.

Asset i, Condition CC4: If the asset’s Attribute A5 [class] = rail and Attribute A6 [failure mode] = breakage, then Ci,cost = 5.

Failure Frequency: FFi = 2 failures per year.

Consequences on Safety: Ci,safety = 8

Consequences on Environment: Ci,environment = 3

Consequences on Service: Ci,service = 7

Consequences on Costs: Ci,cost = 5

Simplifying, if the weights are (wsafety = 0.4), (wenvironment = 0.2), (wservice = 0.3), and (wcosts = 0.1), then the criticality would be

CRi = 2 ⋅ (0.4 ⋅ 8 + 0.2 ⋅ 3 + 0.3 ⋅ 7 + 0.1 ⋅ 5) = 2 ⋅ (3.2 + 0.6 + 2.1 + 0.5) = 2 ⋅ 6.4 = 12.8

This criticality asset value could be compared with other asset values to decide maintenance priorities. For further understanding, Figure 3 shows a real example of this methodology applied in the development of the use case. Appendix A shows the Python script developed.
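The conditions above can also be expressed as simple rule functions. The following simplified sketch reproduces the worked example; it is not the generalized Appendix A script, and the fallback values used when a condition does not hold are illustrative assumptions.

```python
def evaluate_rules(asset: dict) -> tuple[float, dict]:
    """Apply conditions FF1 and CC1-CC4 to one asset's attributes (simplified)."""
    ff = 2.0 if (asset["location"] == "Line C4, Km 16"
                 and asset["class"] == "rail") else 1.0  # condition FF1
    c = {
        "safety": 8 if asset["class"] == "rail" and asset["zone"] == "main track" else 4,
        "environment": 3 if asset["area"] == "protected" else 1,
        "service": 7 if asset["time"] == "peak" else 3,
        "costs": 5 if asset["class"] == "rail" and asset["failure_mode"] == "breakage" else 2,
    }
    return ff, c

weights = {"safety": 0.4, "environment": 0.2, "service": 0.3, "costs": 0.1}
asset = {"location": "Line C4, Km 16", "class": "rail", "zone": "main track",
         "area": "protected", "time": "peak", "failure_mode": "breakage"}
ff, consequences = evaluate_rules(asset)
cr = ff * sum(weights[d] * consequences[d] for d in weights)
print(cr)  # 12.8, matching the worked example above
```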

3.4. Proposed Procedure

In the context of railway infrastructure maintenance, this study applies a digitalized process of criticality analysis, as detailed in the case study in Section 4. This process involves the implementation of a customized data model to enhance maintenance decision-making. The criticality-based maintenance model is specifically applied to a railway track system, where maintenance managers must prioritize activities based on the importance rating of the equipment. The steps of this approach are implemented as follows:
  • Data Collection: The first phase consists of obtaining updated data on demand from different points, e.g., track maintenance history, asset classification, asset inventory, section criteria, operational definitions, and others.

  • Data processing: After collecting the data, they are processed and converted to a commonly understandable format according to each asset type’s respective business rules, considering that the railway system has a heterogeneous asset typology. In this sense, each asset type has a specific rule and sphere of influence. In this step, we define the relationship between each operational entity and several relationship tables to prepare the query that will provide the attribute values for each asset.

  • Sub-model encapsulation: Within the context of AAS, these processed data are encoded into sub-models, allowing for a current digital reproduction of each asset’s state in time. To avoid manual data processing, we attach to each asset a shell that collects the entities and assigns them a model of inputs and outputs that can then be automatically run to obtain the normalized information on demand.

  • Criticality Assessment: Sophisticated rule-based algorithms process the data, and a criticality score is determined for each asset. These scores indicate the severity of the safety, performance, and cost impacts associated with each asset.

  • Risk Alerts and Dashboard: With up-to-date information, the system can automatically produce risk alerts on highly critical assets based on their criticality scores, which are displayed on an easy-to-understand dashboard. It helps maintenance managers decide which assets should be prioritized for maintenance. In the case of criticality assessment, real-time data are not necessary because the objective is a strategic assessment. Still, data availability enables the system to apply condition-based techniques or health monitoring tools, of course, with other output models.

  • Integration with the EAM System: This integrates criticality scores and risk alerts into the EAM system, initiating specific maintenance activities if major changes in risk occur.

For example, we can analyze a section of railway track that has degraded to different degrees. This wear is measured by the monitoring train, which transmits data on the new condition of the track segment. The data model receives these data, processes and analyzes them against updated data, and makes them available so that any engineer can request up-to-date information to reassess the criticality of the line. In addition, these data can be contained in the AAS, which yields a high criticality score for some track lengths due to abnormal patterns and large deviations in the segment in question [27]. The system issues a risk alert, which can be viewed on the maintenance control panel. After reviewing the panel in the EAM, the maintenance manager determines that the section must be inspected and repaired promptly, as it presents a significant threat to train operation. This decision updates the EAM system and dispatches a maintenance team to address the problem promptly, thus preventing potential incidents.
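A minimal sketch of this alert step might look like this, assuming a fixed criticality threshold; `create_work_order` is a hypothetical stand-in for the EAM system’s actual interface, not a real API.

```python
CRITICALITY_ALERT_THRESHOLD = 10.0  # assumed policy value, set by the operator

def create_work_order(asset_id: str, reason: str) -> None:
    """Hypothetical stand-in for the EAM integration (e.g., a REST call)."""
    print(f"EAM work order created for {asset_id}: {reason}")

def on_condition_update(asset_id: str, old_cr: float, new_cr: float) -> None:
    """Re-assess criticality after new monitoring data; alert on a threshold crossing."""
    if new_cr >= CRITICALITY_ALERT_THRESHOLD > old_cr:
        create_work_order(asset_id, f"criticality rose from {old_cr:.1f} to {new_cr:.1f}")

# Monitoring train reports increased rail wear; recomputed criticality crosses the threshold.
on_condition_update("TRACK-C4-KM16", old_cr=8.4, new_cr=12.8)
```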

4. Case Study

4.1. Scenario and Application

The selected case study focuses on a commuter rail network covering approximately 700,000 assets. One of the busiest lines within this vast network was chosen for a detailed criticality analysis. Initially, over 12,000 elements were assessed on this line (source: combined SQL queries on the master data of the asset inventory database), but after a thorough review, the analysis focused on 4018 key assessable assets (obtained through a manual and semi-automatic filtering process applied to the original query results). The remaining assets were linked to these key assets through parent–child relationship structures, allowing the data collected from the secondary assets to contribute to the analysis of the performance of the primary assets. The criticality analysis, based on operational attributes such as location, network type, and traffic speed, as introduced in Section 3.3, was conducted in several stages, starting with extracting historical data on asset performance, covering six years and accumulating more than 200,000 failure events (source: dedicated software of the corrective maintenance system). These events were subjected to a segmentation and filtering process to select 13,000 relevant records for the study. However, the failure descriptions were recorded as free text (not coded), which made a more in-depth analysis of the failure modes challenging. Clear criteria allowed the derivation of meaningful values at the element level for each section of the assessed network. One of the most significant challenges was the lack of accurate data on maintenance costs in the enterprise resource planning (ERP) system, which limited the analysis in that specific dimension. A detailed process of creating and populating the data master (master table) was implemented to facilitate the criticality analysis, serving as the central asset assessment database. Specifically, the data model and the master table were applied to promote a comprehensive analysis of the criticality of railway assets:
  • The data model provided an organized structure for capturing and analyzing detailed information about each asset, including attributes such as type, location, current status, and maintenance history. This structure was essential to understanding the relationships between the different components of the network and how the failure of one component could affect others.

  • The master table was created as a centralized database that integrated data from various sources, such as inventories and maintenance records, through SQL queries. This process allowed the extraction and combination of crucial data, such as failure frequency and asset classes, resulting in a rich and detailed database for criticality assessment.

Using the master table allowed for automating the calculation of asset criticality, assigning a specific value to each asset based on its characteristics and operational context. This automation was achieved by integrating various relationship tables that identified the levels of intervention required for each type of asset. In addition, cross-referencing with other key tables was performed to obtain specific characteristics, such as network type and station category, which were essential to correctly associate attributes with specific network assets. In other words, the data model and the master table facilitated a more dynamic and effective criticality analysis, allowing the prioritization of interventions and the optimization of asset management in the rail network.

Specific data covering various dimensions of the railway system were used in the practical application. These data included track maintenance history, asset classification, asset inventory, section criteria, and operational definitions. These data were collected on demand from different points, ensuring the information was up-to-date and relevant to the criticality analysis (Table 5). The data collection process started by obtaining information from various sources, such as maintenance records, asset inventories, and sensors installed on the railway infrastructure. These data were transformed and standardized to ensure their consistency and usefulness, converting them into standard formats that the asset management system could use.

Similarly, data capture would be carried out for the rest of the characteristics, such as:

  • Type of substation

  • Slope height and distance

  • Maximum altitude (elevation)

  • Type of high-voltage line

  • If the maintainable asset is located in a tunnel

  • If the maintainable asset is on a curve

  • If the maintainable asset is located in an environmentally protected area

  • If the maintainable asset is at an upper step

  • Unavailability time (UT) and failure frequency (FF) for each station or route

  • Average hourly traffic

Data collection was a comprehensive process that involved obtaining information from multiple sources, transforming it into standard formats, and integrating real-time data for a more dynamic and accurate analysis of railway assets’ criticality. Once the data were collected and linked, customized business rules were implemented for each type of asset in railway systems.

An example of the rules applied is shown in Figure 3, where four levels of severity are defined: low, medium, high, and unacceptable, specifically for one of the safety factors. In this case, the severity is determined based on the values of multiple asset attributes, such as
  • Class: Classification of the asset according to its function or criticality in the railway network.

  • Type_Via: The type of track where the asset is located, which can influence criticality and the risk of failure.

  • Track Device: Track-installed devices that can modify the behavior of the asset or influence its probability of failure.

Figure 3 provides a clear visual representation of how severity levels are determined based on asset attributes, allowing readers to understand how assets are prioritized for maintenance. By including key factors such as the Class, Type_Via, and Track Device, the figure reinforces the idea that the criticality of assets depends not only on their individual state but also on their context within the rail network.

The key results obtained from the criticality analysis in the rail network case study were significant for asset management and optimization. Some of them are summarized as follows:

  • Signaling systems:

    More than 500 systems assessed.

    Average frequency of failures: 1.2 incidents per year.

    Average delays: 45 min, affecting more than 100 trains and 60,000 passengers daily.

  • Track devices:

    More than 700 evaluated.

    Frequency of failures: 0.4 per year.

    Serious consequences: traffic disruption on sections of more than 15 km, affecting 20,000 passengers and generating costs of 50,000 euros per disruption.

  • Electrical substations:

    More than 320 assessed.

    Infrequent but critical failures.

    Impact: paralysis of sections of up to 50 km, affecting 200,000 passengers and costs of 100,000 euros per hour of inactivity.

    Proposal: real-time monitoring systems are expected to reduce downtime by 40%.

  • Digitalization:

    Key for real-time data update.

    Digital twins for continuous monitoring and fault simulations.

In short, following the methodology proposed by Parra et al. [28], this application resulted in (i) a 25% reduction in unplanned downtime, (ii) improved service availability, (iii) a 15% reduction in operating costs, and (iv) estimated savings of €2 million in the first year. In other words, digitalization and criticality analysis transform asset management by enabling prioritization of critical assets and optimization of resources, resulting in improved operational reliability and reduced costs. This strategy effectively improves safety and efficiency, providing valuable guidance for replication in other critical infrastructures. The challenges encountered during implementation, from data integration and network heterogeneity to technological infrastructure and training, are examined in Section 5. An example of the criticality matrix obtained is shown in Figure 4.


4.2. Rule Automation

In this work, an automated criticality rule evaluation process was developed by implementing a Python 3.10 script. Initially defined in Excel formula format, these rules were translated into nested conditionals in Python for dynamic evaluation on a dataset of maintainable assets. Each asset has different attributes that condition its criticality level in several dimensions, such as failure frequency, safety, environmental impact, and operational cost overruns. The implemented Python code evaluates each criticality rule according to the values of these attributes, allowing complex decision logic to be applied efficiently and consistently throughout the database. The conditional structure of the code replicates the rule-based decisions originally expressed by the Excel formulas but with greater flexibility and scalability. The generalized code that performs this evaluation is presented in Appendix A.

5. Discussion

During the case study, several significant challenges were faced that impacted the implementation of criticality analysis on the rail network. One of the main challenges was data integration and data quality. The effectiveness of criticality analysis depends on accurate, complete, and up-to-date data. However, the available data were fragmented and came from different sources and systems that were not always integrated, making it difficult to use effectively. Another major challenge was the complexity and heterogeneity of the railway network. The network comprised various interdependent components, each with its technical characteristics, criticality levels, and maintenance requirements. This interdependence required a methodology that considered the criticality of each element and the system as a whole. In addition, the technological infrastructure needed to implement advanced criticality analysis presented significant challenges. Robust data management systems, specialized software, and advanced analysis tools such as digital twins and machine learning algorithms were required. Procurement, installation, and maintenance of this infrastructure proved costly and technically complex.

Finally, training and change management were critical challenges. Adopting new methodologies and technologies involved a significant change in how asset managers and operational staff carried out their work. This adoption required a considerable commitment to training and organizational change to ensure that staff were technically trained and aligned with the strategic objectives of the data-driven approach. Digitalization significantly improved operational efficiency and decision-making in rail asset management through several key strategies. First, automating data collection and processing allowed for a smoother and more accurate information integration. Sensors installed in the rail infrastructure collected real-time data on asset status, eliminating the need for manual input and reducing the risk of errors. These sensors facilitated a faster and more accurate response to maintenance and operational needs. In addition, the implementation of digital twins provided virtual representations of the physical assets, continuously updated with real-time data. These digital twins allowed asset managers to simulate different operational scenarios and anticipate potential failures before they occurred, resulting in a more proactive approach to maintenance and asset management. The ability to anticipate possible problems improved planning and reduced unplanned downtime, optimizing operational efficiency.

Digitalization also promoted interoperability and connectivity between asset management systems, databases, and analysis tools. This connectivity facilitated data integration and provided a more complete and coordinated view of the entire network, improving operational efficiency and decision-making. By sharing and using data effectively, organizations were able to improve cooperation and efficiency in a complex, multinational environment. Furthermore, implementing the data model transformed asset management by providing a powerful tool for continuous criticality assessment, optimizing preventive maintenance, and improving the safety and reliability of railway operations. The results obtained by implementing data modeling and digitalization in railway asset management have several advantages over traditional methods. Firstly, data-driven methods offer a more accurate and reliable asset criticality assessment due to real-time and historical data, which allow for identifying patterns and trends that may not be evident with more conventional methods.

In addition, traditional methods often rely on manual inspections and qualitative assessments, which can lead to decisions based on assumptions or experience alone. In contrast, the digitalized approach allows for quantitative and objective evaluation, making it easier to prioritize maintenance actions based on concrete data. This prioritization optimizes resource allocation and improves operational efficiency; in this case study, it reduced unplanned downtime by 25%. Digitalization also enables greater adaptability and a more agile response to changes in the network and operating environment, something that traditional methods cannot match due to their rigidity. Simulating failure scenarios and planning proactive interventions are further significant advantages, as they reduce service interruptions and improve availability. Finally, interoperability and data integration in a unified framework provide a more complete and coordinated view of the network, something that traditional methods, with their partitioned systems, cannot offer. In short, data models outperform conventional methods in accuracy, efficiency, and responsiveness, significantly improving asset management. The study also has several limitations that could be addressed in future research to improve the implementation and effectiveness of the data model in railway asset management, for example:

  • The complexity and heterogeneity of railway networks complicate data integration and the application of the criticality model.

  • The large volume of available data creates a risk of information overload.

  • Advanced data filtering and analysis techniques are needed to prioritize critical information.

  • Initial implementation costs for advanced technology and staff training are high.

  • The scalability and flexibility of digital solutions require more attention.

  • Scalable models adaptable to different operational contexts and infrastructures remain to be developed.

  • Standardized frameworks are needed to facilitate the integration of digital technologies across diverse rail networks.

In summary, addressing these limitations through future research could significantly improve the effectiveness and efficiency of the data model in rail asset management. For the practical implementation of the digitalization-based approach and the data model for railway asset management, several key recommendations can be made:

  • Developing a Standardized Framework: Standardized frameworks and protocols are crucial for integrating diverse digital technologies into criticality assessment. They help overcome data integration challenges and ensure that systems are interoperable.

  • Training and Change Management: Adopting new technologies requires significant organizational change. Implementing training programs is essential to ensure that staff are technically trained and aligned with the strategic objectives of the data-driven approach.

  • Cost–Benefit Analysis: Conducting a detailed cost–benefit analysis is essential to justify investments in digitalization. This analysis will help organizations understand the long-term benefits and cost savings of implementing these technologies.

  • Optimizing Data Quality: Ensuring data accuracy, completeness, and timeliness is essential for effective criticality analysis. Robust data collection and management processes must be established to minimize discrepancies that could lead to erroneous assessments.

  • Implementing Advanced Technologies: Advanced tools such as digital twins and machine learning algorithms can significantly improve criticality analysis. These technologies enable dynamic, real-time analysis, optimizing fault identification and prioritization (see the sketch following these recommendations).

  • Cybersecurity and Data Privacy: It is critical to implement cybersecurity measures to protect the integrity of information used in criticality analysis, ensuring that the data are secure and used ethically.

  • Application in Real-Time Environments: Real-time data sources could be integrated into the model; in particular, real-time traffic conditions and geometric measurements from inspection trains could enhance the accuracy and immediacy of the criticality assessment process. Including these sources would minimize delays in data-driven decision-making, which in the current system can sometimes exceed 12 months; such delays reduce the responsiveness of maintenance and criticality evaluation, ultimately affecting the safety and efficiency of railway operations. In addition, new IoT systems currently under development hold significant promise for further enhancing the model’s responsiveness. These systems could enable real-time updates on critical infrastructure components, such as level crossings and electrification systems, allowing the model to process up-to-the-minute information on these essential assets and to capture dynamic changes in asset condition and usage that static or delayed data may not reflect. Integrating these real-time data would provide more accurate and timely insights into the state of railway assets, supporting proactive maintenance decisions and reducing the risk of unexpected failures.

These recommendations can facilitate successful implementation and maximize the positive impact of a digital approach to rail asset management.
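
To illustrate the "Implementing Advanced Technologies" recommendation above, the sketch below applies a generic machine-learning anomaly detector (scikit-learn's IsolationForest) to hypothetical per-asset operational features. The feature set, contamination rate, and asset identifiers are our own assumptions, and any flagged asset would still require engineering review; this is a screening aid, not a complete criticality method.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-asset features: [failure_freq, unavailability_h, wear_mm]
    X = np.array([
        [0.5, 1.0, 4.2],
        [0.6, 1.5, 4.8],
        [0.4, 0.8, 3.9],
        [3.2, 6.5, 12.7],  # asset with a deviant operational profile
        [0.5, 1.2, 4.5],
    ])
    asset_ids = ["A1", "A2", "A3", "A4", "A5"]

    # Fit an unsupervised anomaly detector on the fleet's operational profiles.
    model = IsolationForest(contamination=0.2, random_state=0).fit(X)
    flags = model.predict(X)  # -1 = anomalous, 1 = normal

    for asset, flag in zip(asset_ids, flags):
        if flag == -1:
            print(f"{asset}: anomalous profile, review its criticality")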

Next steps: The integration of artificial intelligence (AI) in railway maintenance presents considerable advantages for asset management, enhancing the predictive and preventive capabilities essential for operational efficiency and safety. AI can analyze large volumes of real-time and historical data to forecast failures, detect anomalies, and improve maintenance scheduling with a higher degree of accuracy than traditional methods. For instance, machine learning models can identify wear patterns and potential failure points, allowing maintenance teams to preemptively address issues that might otherwise lead to service disruptions. Moreover, AI-powered simulations can model various operational scenarios, providing valuable insights for optimizing maintenance resources. However, implementing AI in this context also poses significant challenges, such as ensuring the availability of high-quality data, developing standardized AI protocols for safety, and overcoming compatibility issues with legacy systems. Addressing these challenges is essential to fully leverage AI’s potential for improving the resilience and reliability of railway infrastructure; in particular, a strong foundation of data and data models is needed before genuinely useful AI applications can be built.
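
As a minimal, non-authoritative sketch of such AI-based failure forecasting (the features, labels, and 90-day horizon below are invented for illustration), a supervised model can be trained on historical failure records and then queried for the failure probability of an in-service asset:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical history: [wear_mm, traffic_speed_kmh] -> failed within 90 days?
    X_hist = np.array([[4.0, 120], [6.5, 140], [12.8, 160],
                       [13.5, 160], [5.2, 100], [11.9, 140]])
    y_hist = np.array([0, 0, 1, 1, 0, 1])

    # Train a simple classifier on the historical failure records.
    clf = LogisticRegression().fit(X_hist, y_hist)

    # Estimate the failure probability of an in-service asset from its state.
    p_fail = clf.predict_proba(np.array([[12.1, 150]]))[0, 1]
    print(f"Estimated 90-day failure probability: {p_fail:.2f}")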

6. Conclusions

This study has demonstrated the transformative impact of digitalization and criticality analysis on rail asset management. Integrating a vector of operational attributes, such as failure frequency, unavailability time, asset location, and traffic speed, makes it possible to assess the criticality of assets dynamically and in real time. This data-driven approach allows for the prioritization of critical assets, optimizing resource allocation and improving operational reliability, which is essential for the efficient management of critical infrastructure.

The research question focused on whether a systematic method based on digitalization and real-time data could provide more accurate and timely criticality evaluations than traditional static methods. The results confirm that by continuously updating the vector of attributes for each asset, we can more precisely calculate critical parameters, such as failure probability and potential consequences. This method improves maintenance efficiency by focusing on the most critical assets, leading to a 25% reduction in unplanned downtime and a 15% decrease in operating costs.
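
As a worked illustration of this calculation, criticality can be expressed as the product of a failure-likelihood estimate and a consequence score derived from the attribute vector. The weights, normalization constants, and asset identifiers in the sketch below are illustrative assumptions, not the study's calibrated values.

    # Illustrative dynamic criticality score from the operational attribute vector.
    # Scaling constants and weights are assumptions for demonstration only.

    def criticality(failure_freq_per_year: float,
                    unavailability_h: float,
                    traffic_speed_kmh: float,
                    location_factor: float) -> float:
        """Criticality = failure likelihood x operational consequence."""
        likelihood = min(failure_freq_per_year / 10.0, 1.0)  # scaled to [0, 1]
        consequence = ((unavailability_h / 24.0)
                       * (traffic_speed_kmh / 300.0)
                       * location_factor)
        return likelihood * consequence

    assets = {
        "SW-017": criticality(3.2, 6.5, 160, location_factor=1.5),  # busy junction
        "TR-104": criticality(0.8, 2.0, 120, location_factor=1.0),  # secondary line
    }
    # Rank assets so maintenance focuses on the most critical ones first.
    for asset, score in sorted(assets.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{asset}: criticality={score:.3f}")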

Integrating advanced technologies, such as digital twins and real-time monitoring, further enhances this approach, allowing criticality to be continuously updated and recalculated as asset conditions evolve. This dynamic assessment model supports better decision-making, safety, and the long-term sustainability of rail operations.

In conclusion, developing a practical ontological framework for criticality assessment centered on the vector of operational attributes provides a theoretical and practical guide that can be adapted to various railway contexts. This framework addresses the increasing complexity of rail infrastructure management, offering an effective strategy for the future of asset management in critical environments.


