
20200325-1 Python Machine Learning (Extracting the Lung Region from CT Scans)

by 낫싱 2020. 3. 25.

CT_lung_image_0324호지수 - Jupyter Notebook.pdf (7.11 MB)
CT_lung_image_0324호지수.ipynb (1.80 MB)

import numpy as np               # matrix/array operations
import matplotlib.pyplot as plt  # plotting

# Keras imports for the deep-learning model
from keras.layers import Input, Activation, Conv2D, Flatten, Dense, MaxPooling2D, \
    Dropout, Add, LeakyReLU, UpSampling2D
from keras.models import Model, load_model
from keras.callbacks import ReduceLROnPlateau

Using TensorFlow backend.
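This notebook runs standalone Keras on the TensorFlow 1.x backend, hence the message above. On TensorFlow 2.x the same layers ship under tf.keras, so a drop-in equivalent of these imports (my sketch, not from the original post) would be:

from tensorflow.keras.layers import (Input, Activation, Conv2D, Flatten, Dense,
                                     MaxPooling2D, Dropout, Add, LeakyReLU, UpSampling2D)
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.callbacks import ReduceLROnPlateau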


Load Dataset

In [2]:

# Load the .npy files.
x_train = np.load('dataset/x_train.npy')  # CT images
y_train = np.load('dataset/y_train.npy')  # lung-region mask images
x_val = np.load('dataset/x_val.npy')      # validation images
y_val = np.load('dataset/y_val.npy')      # validation masks

print(x_train.shape, y_train.shape)
# (x) (240, 256, 256, 1): 240 training samples, 256x256 inputs, 1 channel (grayscale)
# (y) the target masks have the same shape
print(x_val.shape, y_val.shape)
# (27, 256, 256, 1) (27, 256, 256, 1): 27 validation samples, same size and channels

(240, 256, 256, 1) (240, 256, 256, 1)
(27, 256, 256, 1) (27, 256, 256, 1)
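The post doesn't show how these .npy files were produced. As a rough, hypothetical illustration (the variable names and random data below are mine), CT slices are typically stacked into an (N, 256, 256, 1) float array and scaled to [0, 1]:

import numpy as np

slices = [np.random.rand(256, 256) for _ in range(10)]    # stand-in for real CT slices
x = np.stack(slices).astype(np.float32)[..., np.newaxis]  # stack and add a channel axis
x = (x - x.min()) / (x.max() - x.min() + 1e-8)            # scale intensities to [0, 1]
print(x.shape)  # (10, 256, 256, 1)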

Build Model

In [3]:

inputs = Input(shape=(256,256,1))  # 256x256 input, 1 channel (grayscale)

# Encoder: shrink the spatial dimensions step by step.
net = Conv2D(32, kernel_size=3, activation='relu', padding='same')(inputs)
# a 2D convolution layer
net = MaxPooling2D(pool_size=2, padding='same')(net)
# pool_size=2 halves the spatial resolution (256x256 -> 128x128)

net = Conv2D(64, kernel_size=3, activation='relu', padding='same')(net)
net = MaxPooling2D(pool_size=2, padding='same')(net)

net = Conv2D(128, kernel_size=3, activation='relu', padding='same')(net)
net = MaxPooling2D(pool_size=2, padding='same')(net)

net = Dense(128, activation='relu')(net)  # extra Dense layer to boost the model's capacity to learn

# Decoder: grow the spatial dimensions back.
net = UpSampling2D(size=2)(net)  # UpSampling2D doubles the resolution (size=2)
net = Conv2D(128, kernel_size=3, activation='sigmoid', padding='same')(net)  # then convolve

net = UpSampling2D(size=2)(net)
net = Conv2D(64, kernel_size=3, activation='sigmoid', padding='same')(net)

net = UpSampling2D(size=2)(net)
outputs = Conv2D(1, kernel_size=3, activation='sigmoid', padding='same')(net)  # 1-channel output

model = Model(inputs=inputs, outputs=outputs)  # wire up the inputs and outputs

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc', 'mse'])
# optimizer: adam (a solid default when you're not sure what to pick)
# loss='binary_crossentropy': each output pixel is scored as a 0-or-1 decision
# (see the worked example after the model summary below)

model.summary()
# The input shrinks 256x256 -> 128x128 -> 64x64 -> 32x32 through the encoder,
# then grows 32x32 -> 64x64 -> 128x128 -> 256x256 through the decoder.
# Input and output sizes match, completing a convolutional encoder/decoder.

 

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 256, 256, 1)       0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 256, 256, 32)      320
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 128, 128, 32)      0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 128, 128, 64)      18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 64, 64, 64)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 64, 64, 128)       73856
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 32, 32, 128)       0
_________________________________________________________________
dense_1 (Dense)              (None, 32, 32, 128)       16512
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 64, 64, 128)       0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 64, 64, 128)       147584
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 128, 128, 128)     0
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 128, 128, 64)      73792
_________________________________________________________________
up_sampling2d_3 (UpSampling2 (None, 256, 256, 64)      0
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 256, 256, 1)       577
=================================================================
Total params: 331,137
Trainable params: 331,137
Non-trainable params: 0
_________________________________________________________________
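As a quick illustration of the loss chosen above (my example, not from the post): binary cross-entropy scores each predicted pixel probability p against its 0-or-1 mask label y.

import numpy as np

# BCE for a single pixel: -(y*log(p) + (1-y)*log(1-p))
y, p = 1.0, 0.9
bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
print(round(float(bce), 4))  # 0.1054 -- small, because p is close to y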

Train

In [4]:

history = model.fit(x_train, y_train, validation_data=(x_val, y_val),  # train with fit()
    epochs=100, batch_size=32,  # 100 passes over the data, 32 samples per batch
    callbacks=[
        ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=10, verbose=1, mode='auto', min_lr=1e-05)
    ])

 

Train on 240 samples, validate on 27 samples
Epoch 1/100
240/240 [==============================] - 94s 392ms/step - loss: 0.5738 - acc: 0.7437 - mse: 0.1891 - val_loss: 0.5228 - val_acc: 0.7448 - val_mse: 0.1727
Epoch 2/100
240/240 [==============================] - 95s 396ms/step - loss: 0.4908 - acc: 0.7624 - mse: 0.1611 - val_loss: 0.4697 - val_acc: 0.7467 - val_mse: 0.1568
...
Epoch 66/100
240/240 [==============================] - 100s 415ms/step - loss: 0.0874 - acc: 0.9669 - mse: 0.0221 - val_loss: 0.1355 - val_acc: 0.9549 - val_mse: 0.0322
Epoch 00066: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
...
Epoch 76/100
240/240 [==============================] - 94s 392ms/step - loss: 0.0473 - acc: 0.9813 - mse: 0.0105 - val_loss: 0.1191 - val_acc: 0.9674 - val_mse: 0.0232
Epoch 00076: ReduceLROnPlateau reducing learning rate to 4.0000001899898055e-05.
...
Epoch 86/100
240/240 [==============================] - 95s 397ms/step - loss: 0.0458 - acc: 0.9818 - mse: 0.0101 - val_loss: 0.1203 - val_acc: 0.9687 - val_mse: 0.0225
Epoch 00086: ReduceLROnPlateau reducing learning rate to 1e-05.
...
Epoch 100/100
240/240 [==============================] - 96s 400ms/step - loss: 0.0454 - acc: 0.9819 - mse: 0.0100 - val_loss: 0.1206 - val_acc: 0.9688 - val_mse: 0.0223
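The notebook imports load_model at the top but never calls it. A minimal save-and-restore sketch (the filename 'lung_seg.h5' is a hypothetical choice of mine):

model.save('lung_seg.h5')             # stores architecture, weights, and optimizer state
restored = load_model('lung_seg.h5')  # reload later for inference without retraining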

Evaluation

In [5]:

 

fig, ax = plt.subplots(2, 2, figsize=(10, 7))

ax[0,0].set_title('loss')
ax[0,0].plot(history.history['loss'], 'r')        # the history object exposes the training loss
ax[0,1].set_title('acc')
ax[0,1].plot(history.history['acc'], 'b')         # and the training accuracy

ax[1,0].set_title('val_loss')
ax[1,0].plot(history.history['val_loss'], 'r--')  # the validation loss
ax[1,1].set_title('val_acc')
ax[1,1].plot(history.history['val_acc'], 'b--')   # and the validation accuracy
# Row 1 shows the training curves,
# row 2 the validation curves.

 

Out[5]:

[<matplotlib.lines.Line2D at 0x1cd9fc99448>]

 

In [6]:

 

preds = model.predict(x_val)  # run the model on the validation set

fig, ax = plt.subplots(len(x_val), 3, figsize=(10, 100))

for i, pred in enumerate(preds):
    ax[i, 0].imshow(x_val[i].squeeze(), cmap='gray')  # column 1: the CT image (x_val[i])
    ax[i, 1].imshow(y_val[i].squeeze(), cmap='gray')  # column 2: the ground-truth mask
    ax[i, 2].imshow(pred.squeeze(), cmap='gray')      # column 3: our predicted mask

# The same deep-learning approach can also be applied to numeric data,
# e.g. predicting from oxygen levels or red/white blood cell counts.
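The post stops at visual inspection, but a common numeric check for segmentation quality is the Dice coefficient on thresholded predictions. A minimal sketch assuming preds and y_val as above (the 0.5 threshold is a conventional choice, not from the post):

pred_bin = (preds > 0.5).astype(np.float32)            # binarize the sigmoid outputs
intersection = (pred_bin * y_val).sum(axis=(1, 2, 3))  # per-case overlap with the ground truth
dice = 2 * intersection / (pred_bin.sum(axis=(1, 2, 3)) + y_val.sum(axis=(1, 2, 3)) + 1e-8)
print(dice.mean())                                     # mean Dice over the 27 validation cases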
