Remaining useful life models
BaseRemainingUsefulLifeEstimation
- class ice.remaining_useful_life_estimation.models.base.BaseRemainingUsefulLifeEstimation(window_size: int, stride: int, batch_size: int, lr: float, num_epochs: int, device: str, verbose: bool, name: str, random_seed: int, val_ratio: float, save_checkpoints: bool)
Bases: BaseModel, ABC
Base class for all RUL models.
- Parameters:
window_size (int) – The window size to train the model.
stride (int) – The time interval between the first points of consecutive sliding windows in training (see the sliding-window sketch after this parameter list).
batch_size (int) – The batch size to train the model.
lr (float) – The learning rate to train the model.
num_epochs (int) – The number of epochs to train the model.
device (str) – The name of the device to train the model on; cpu and cuda are possible.
verbose (bool) – If true, show the progress bar in training.
name (str) – The name of the model for artifact storing.
random_seed (int) – Seed for random number generation to ensure reproducible results.
val_ratio (float) – Proportion of the dataset used for validation, between 0 and 1.
save_checkpoints (bool) – If true, store checkpoints.
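All RUL models share the sliding-window preprocessing implied by window_size and stride. The sketch below is illustrative only; the make_windows helper is hypothetical and not part of the ice API.

```python
# Hypothetical illustration of window_size / stride (not part of the ice API).
import numpy as np

def make_windows(series: np.ndarray, window_size: int, stride: int) -> np.ndarray:
    """Slice a (T, C) sensor series into (N, window_size, C) training windows."""
    starts = range(0, series.shape[0] - window_size + 1, stride)
    return np.stack([series[s:s + window_size] for s in starts])

series = np.random.rand(100, 14)  # 100 time steps, 14 sensors (example values)
print(make_windows(series, window_size=32, stride=1).shape)  # (69, 32, 14)
```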
MLP
- class ice.remaining_useful_life_estimation.models.mlp.MLP(window_size: int = 32, stride: int = 1, hidden_dim: int = 512, batch_size: int = 64, lr: float = 0.0001, num_epochs: int = 15, device: str = 'cpu', verbose: bool = True, name: str = 'mlp_cmapss_rul', random_seed: int = 42, val_ratio: float = 0.15, save_checkpoints: bool = False)
Bases: BaseRemainingUsefulLifeEstimation
Multilayer Perceptron (MLP) consisting of input, hidden, and output layers with ReLU activation. Each sample is flattened, (B, L, C) -> (B, L * C), where B is the batch size, L is the sequence length, and C is the number of sensors. A minimal sketch of this forward pass follows the parameter list.
- Parameters:
window_size (int) – The window size to train the model.
stride (int) – The time interval between the first points of consecutive sliding windows.
hidden_dim (int) – The dimensionality of the hidden layer in MLP.
batch_size (int) – The batch size to train the model.
lr (float) – The learning rate to train the model.
num_epochs (int) – The number of epochs to train the model.
device (str) – The name of the device to train the model on; cpu and cuda are possible.
verbose (bool) – If true, show the progress bar in training.
name (str) – The name of the model for artifact storing.
random_seed (int) – Seed for random number generation to ensure reproducible results.
val_ratio (float) – Proportion of the dataset used for validation, between 0 and 1.
save_checkpoints (bool) – If true, store checkpoints.
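A minimal PyTorch sketch of the forward pass described above, assuming a single hidden layer and a scalar RUL output per window. This is an illustration, not the library's exact implementation; the sensor count is an assumed example.

```python
# Minimal sketch of the MLP described above: flatten (B, L, C) -> (B, L * C),
# then input -> hidden -> output with ReLU. Layer layout is an assumption.
import torch
from torch import nn

class MLPSketch(nn.Module):
    def __init__(self, window_size: int = 32, num_sensors: int = 14, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                                   # (B, L, C) -> (B, L * C)
            nn.Linear(window_size * num_sensors, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),                       # scalar RUL per window
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

x = torch.randn(64, 32, 14)   # (B, L, C)
print(MLPSketch()(x).shape)   # torch.Size([64])
```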
LSTM
- class ice.remaining_useful_life_estimation.models.lstm.LSTM(window_size: int = 32, stride: int = 1, hidden_dim: int = 512, hidden_size: int = 256, num_layers: int = 2, dropout_value: float = 0.5, batch_size: int = 64, lr: float = 0.0001, num_epochs: int = 35, device: str = 'cpu', verbose: bool = True, name: str = 'mlp_cmapss_rul', random_seed: int = 42, val_ratio: float = 0.15, save_checkpoints: bool = False)
Bases: BaseRemainingUsefulLifeEstimation
Long short-term memory (LSTM) model consisting of a stack of classical LSTM layers followed by a two-layer MLP with SiLU nonlinearity and dropout that makes the final prediction. Each sample is passed through the LSTM, mapping (B, L, C) -> (B, hidden_size, C); the output is then flattened, (B, hidden_size, C) -> (B, hidden_size * C). A minimal sketch of this architecture follows the parameter list.
- Parameters:
window_size (int) – The window size to train the model.
stride (int) – The time interval between the first points of consecutive sliding windows.
hidden_dim (int) – The dimensionality of the hidden layer in MLP.
hidden_size (int) – The number of features in the hidden state of the model.
num_layers (int) – The number of stacked recurrent layers of the classic LSTM architecture.
dropout_value (float) – Dropout probability in the model layers.
batch_size (int) – The batch size to train the model.
lr (float) – The learning rate to train the model.
num_epochs (int) – The number of epochs to train the model.
device (str) – The name of the device to train the model on; cpu and cuda are possible.
verbose (bool) – If true, show the progress bar in training.
name (str) – The name of the model for artifact storing.
random_seed (int) – Seed for random number generation to ensure reproducible results.
val_ratio (float) – Proportion of the dataset used for validation, between 0 and 1.
save_checkpoints (bool) – If true, store checkpoints.
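A minimal PyTorch sketch of the architecture described above. It follows the standard PyTorch convention of reading the last LSTM time step before the two-layer SiLU head, so the exact reshaping may differ from the library's implementation; the sensor count is an assumed example.

```python
# Minimal sketch: stacked LSTM, then a two-layer MLP head with SiLU + dropout.
import torch
from torch import nn

class LSTMSketch(nn.Module):
    def __init__(self, num_sensors: int = 14, hidden_size: int = 256,
                 hidden_dim: int = 512, num_layers: int = 2, dropout_value: float = 0.5):
        super().__init__()
        self.lstm = nn.LSTM(num_sensors, hidden_size, num_layers,
                            batch_first=True, dropout=dropout_value)
        self.head = nn.Sequential(
            nn.Linear(hidden_size, hidden_dim),
            nn.SiLU(),
            nn.Dropout(dropout_value),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)                      # (B, L, hidden_size)
        return self.head(out[:, -1]).squeeze(-1)   # last step -> scalar RUL

x = torch.randn(64, 32, 14)    # (B, L, C)
print(LSTMSketch()(x).shape)   # torch.Size([64])
```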
IR
- class ice.remaining_useful_life_estimation.models.ir.IR(window_size: int = 32, stride: int = 1, noise: float = 0.012097363825546333, num_layers: int = 3, dropout_value: float = 0.2, batch_size: int = 256, lr: float = 0.004091793998895119, num_epochs: int = 27, device: str = 'cpu', verbose: bool = True, name: str = 'ir_model', random_seed: int = 42, val_ratio: float = 0.15, save_checkpoints: bool = False, dim_feedforward: int = 154)
Bases: BaseRemainingUsefulLifeEstimation
IR model. Based on the parameters below, it is built around a stacked Transformer encoder (num_layers layers, each with a feedforward sublayer of width dim_feedforward and dropout) and adds noise to the input during training as a regularizer. A speculative sketch under these assumptions follows the parameter list.
- Parameters:
window_size (int) – The window size to train the model.
stride (int) – The time interval between the first points of consecutive sliding windows.
noise (float) – The input noise level for model training.
dropout_value (float) – Dropout probability in the model layers.
num_layers (int) – The number of stacked Transformer encoder layers.
batch_size (int) – The batch size to train the model.
lr (float) – The learning rate to train the model.
num_epochs (int) – The number of epochs to train the model.
device (str) – The name of the device to train the model on; cpu and cuda are possible.
verbose (bool) – If true, show the progress bar in training.
name (str) – The name of the model for artifact storing.
random_seed (int) – Seed for random number generation to ensure reproducible results.
val_ratio (float) – Proportion of the dataset used for validation, between 0 and 1.
save_checkpoints (bool) – If true, store checkpoints.
dim_feedforward (int) – The dimension of the feedforward sublayer in the Transformer encoder layers.
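A speculative sketch inferred from the parameter list alone: a Transformer encoder with input-noise regularization. The number of attention heads, the sensor count, and the mean-pooling readout are assumptions; consult the source for the actual IR architecture.

```python
# Speculative sketch of IR: Transformer encoder (num_layers layers,
# dim_feedforward-wide FFN, dropout) with Gaussian input noise in training.
# nhead and the mean-pool readout are assumptions, not the library's code.
import torch
from torch import nn

class IRSketch(nn.Module):
    def __init__(self, num_sensors: int = 14, num_layers: int = 3,
                 dim_feedforward: int = 154, dropout_value: float = 0.2,
                 noise: float = 0.012, nhead: int = 2):
        super().__init__()
        self.noise = noise
        layer = nn.TransformerEncoderLayer(
            d_model=num_sensors, nhead=nhead,
            dim_feedforward=dim_feedforward, dropout=dropout_value,
            batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(num_sensors, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            x = x + self.noise * torch.randn_like(x)     # input-noise regularization
        z = self.encoder(x)                              # (B, L, C)
        return self.head(z.mean(dim=1)).squeeze(-1)      # mean-pool -> scalar RUL

x = torch.randn(64, 32, 14)  # (B, L, C)
print(IRSketch()(x).shape)   # torch.Size([64])
```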