Results of HI estimation using SOTA models

This notebook presents experimental results of HI estimation on the Milling dataset using the Stacked_LSTM, TCM and IE_SBiGRU models.

Importing libraries.

from ice.health_index_estimation.datasets import Milling
from ice.health_index_estimation.models import MLP, TCM, IE_SBiGRU, Stacked_LSTM

import pandas as pd
import numpy as np
import torch
from tqdm.auto import trange
    

Initializing the dataset class and building the train/test data split

dataset_class = Milling()

data, target = pd.concat(dataset_class.df), pd.concat(dataset_class.target) 
test_data, test_target = dataset_class.test[0], dataset_class.test_target[0]
Reading data/milling/case_1.csv: 100%|██████████| 153000/153000 [00:00<00:00, 1268689.48it/s]
Reading data/milling/case_2.csv: 100%|██████████| 117000/117000 [00:00<00:00, 1248873.41it/s]
Reading data/milling/case_3.csv: 100%|██████████| 126000/126000 [00:00<00:00, 1276983.81it/s]
Reading data/milling/case_4.csv: 100%|██████████| 63000/63000 [00:00<00:00, 1374151.83it/s]
Reading data/milling/case_5.csv: 100%|██████████| 54000/54000 [00:00<00:00, 1290003.79it/s]
Reading data/milling/case_6.csv: 100%|██████████| 9000/9000 [00:00<00:00, 1003395.34it/s]
Reading data/milling/case_7.csv: 100%|██████████| 72000/72000 [00:00<00:00, 1245529.71it/s]
Reading data/milling/case_8.csv: 100%|██████████| 54000/54000 [00:00<00:00, 1321487.68it/s]
Reading data/milling/case_9.csv: 100%|██████████| 81000/81000 [00:00<00:00, 1377467.66it/s]
Reading data/milling/case_10.csv: 100%|██████████| 90000/90000 [00:00<00:00, 1347769.63it/s]
Reading data/milling/case_11.csv: 100%|██████████| 207000/207000 [00:00<00:00, 1180064.87it/s]
Reading data/milling/case_12.csv: 100%|██████████| 126000/126000 [00:00<00:00, 1276980.73it/s]
Reading data/milling/case_13.csv: 100%|██████████| 135000/135000 [00:00<00:00, 1242669.46it/s]
Reading data/milling/case_14.csv: 100%|██████████| 81000/81000 [00:00<00:00, 1310826.20it/s]
Reading data/milling/case_15.csv: 100%|██████████| 63000/63000 [00:00<00:00, 1316886.37it/s]
Reading data/milling/case_16.csv: 100%|██████████| 18000/18000 [00:00<00:00, 1128731.62it/s]
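
An optional sanity check of the split produced above (not part of the recorded run); it only prints the shapes of the objects created in the previous cell, assuming data/target are the concatenated training frames and dataset_class.test holds the held-out runs:

# Inspect the train/test split created above.
print("Train features:", data.shape, "train target:", target.shape)
print("Held-out test runs:", len(dataset_class.test))
print("Test features:", test_data.shape, "test target:", test_target.shape)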

Scaling the features: the MinMaxScaler is fitted on the training split and then applied to both splits.

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
trainer_data = scaler.fit_transform(data)
tester_data = scaler.transform(test_data)

# restore the DataFrame index and columns dropped by the scaler
trainer_data = pd.DataFrame(trainer_data, index=data.index, columns=data.columns)
tester_data = pd.DataFrame(tester_data, index=test_data.index, columns=test_data.columns)
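
As a quick check of the scaling step, the training features should now lie in [0, 1], while test values may fall slightly outside this range because the scaler statistics come from the training split only. A minimal sketch:

# Training features are mapped to [0, 1]; the test split may exceed this range
# slightly, since the scaler was fitted on the training data only.
print("train range:", trainer_data.values.min(), trainer_data.values.max())
print("test range: ", tester_data.values.min(), tester_data.values.max())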

Loading the Stacked_LSTM model from a saved checkpoint and evaluating it on the test split:

path_to_tar = "hi_sota/"
model_class = Stacked_LSTM(
        window_size=64,
        stride=1024,
        batch_size=253,
        lr=0.0031789041005068647,
        num_epochs=55,
        verbose=True,
        device='cuda'
    )
# model_class.fit(trainer_data, target)
model_class.load_checkpoint(path_to_tar + "stack_sota.tar")
model_class.evaluate(tester_data, test_target)
Creating sequence of samples: 100%|██████████| 14/14 [00:00<00:00, 2809.31it/s]
                                                             
{'mse': 0.0022332468596409335, 'rmse': 0.047257241346072384}
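
The checkpoint reproduces the reported scores without retraining. If no checkpoint is available, the same model can be trained from scratch with the fit call that is commented out above; a minimal sketch with the same hyperparameters (training time depends on the hardware):

# Train Stacked_LSTM from scratch instead of loading a checkpoint.
model_from_scratch = Stacked_LSTM(
    window_size=64,
    stride=1024,
    batch_size=253,
    lr=0.0031789041005068647,
    num_epochs=55,
    verbose=True,
    device='cuda',
)
model_from_scratch.fit(trainer_data, target)
model_from_scratch.evaluate(tester_data, test_target)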

The same evaluation for the TCM model:

model_class = TCM(
        window_size=64,
        stride=1024,
        batch_size=253,
        lr=0.0031789041005068647,
        num_epochs=55,
        verbose=True,
        device='cuda'
    )
# model_class.fit(trainer_data, target)
model_class.load_checkpoint(path_to_tar + "TCM_sota.tar")
model_class.evaluate(tester_data, test_target)
Creating sequence of samples: 100%|██████████| 14/14 [00:00<00:00, 3511.98it/s]
                                                             
{'mse': 0.004014168163365719, 'rmse': 0.06335746335962102}
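
Since evaluate returns a plain dictionary of metrics, the scores of the two models above can be tabulated for a side-by-side view. The values below are copied (rounded) from the outputs printed in this notebook:

# Side-by-side comparison of the metrics reported above (rounded values).
scores = pd.DataFrame({
    "Stacked_LSTM": {"mse": 0.002233, "rmse": 0.047257},
    "TCM": {"mse": 0.004014, "rmse": 0.063357},
}).T
print(scores)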

Training and testing with different random seeds for uncertainty estimation; a sketch of this protocol is given after the evaluation below.

model_class = IE_SBiGRU(
        window_size=64,
        stride=1024,
        batch_size=253,
        lr=0.0011,
        num_epochs=35,
        verbose=True,
        device='cuda'
    )
# model_class.fit(trainer_data, target)
model_class.load_checkpoint(path_to_tar + "IE_SBiGRU_sota.tar")
model_class.evaluate(tester_data, test_target)
Creating sequence of samples: 100%|██████████| 14/14 [00:00<00:00, 2341.13it/s]
                                                            
{'mse': 0.004956771691496658, 'rmse': 0.07040434426579555}
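
A sketch of the seed-variation protocol mentioned above: retrain the model under several random seeds and report the mean and standard deviation of the test RMSE. This is illustrative only; the seed values and their number are arbitrary choices, and each iteration retrains the model from scratch:

# Uncertainty estimation by retraining IE_SBiGRU under different random seeds.
rmse_runs = []
for seed in (0, 1, 2):
    torch.manual_seed(seed)
    np.random.seed(seed)
    model_seed = IE_SBiGRU(
        window_size=64,
        stride=1024,
        batch_size=253,
        lr=0.0011,
        num_epochs=35,
        verbose=False,
        device='cuda',
    )
    model_seed.fit(trainer_data, target)
    rmse_runs.append(model_seed.evaluate(tester_data, test_target)["rmse"])
print(f"RMSE over seeds: {np.mean(rmse_runs):.4f} ± {np.std(rmse_runs):.4f}")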