āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ⧇āϰ āĻ­ā§‚āĻŽāĻŋāĻ•āĻž


āĻāχ āϟāĻŋāωāĻŸā§‹āϰāĻŋāϝāĻŧāĻžāϞāϟāĻŋ āϤāĻŋāύāϟāĻŋ āωāĻĻāĻžāĻšāϰāĻŖ āϏāĻš āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ⧇āϰ āϏāĻžāĻĨ⧇ āĻĒāϰāĻŋāϚāϝāĻŧ āĻ•āϰāĻŋāϝāĻŧ⧇ āĻĻ⧇āϝāĻŧ: āĻŦ⧇āϏāĻŋāĻ•, āχāĻŽā§‡āϜ āĻĄāĻŋāύ⧋āχāϏāĻŋāĻ‚ āĻāĻŦāĻ‚ āĻ…āϏāĻ™ā§āĻ—āϤāĻŋ āϏāύāĻžāĻ•ā§āϤāĻ•āϰāĻŖāĨ¤

āĻāĻ•āϟāĻŋ āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ āĻšāϞ āĻāĻ•āϟāĻŋ āĻŦāĻŋāĻļ⧇āώ āϧāϰāύ⧇āϰ āύāĻŋāωāϰāĻžāϞ āύ⧇āϟāĻ“āϝāĻŧāĻžāĻ°ā§āĻ• āϝāĻž āϤāĻžāϰ āχāύāĻĒ⧁āϟāϕ⧇ āϤāĻžāϰ āφāωāϟāĻĒ⧁āĻŸā§‡ āĻ…āύ⧁āϞāĻŋāĻĒāĻŋ āĻ•āϰāĻžāϰ āϜāĻ¨ā§āϝ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŋāϤāĨ¤ āωāĻĻāĻžāĻšāϰāĻŖāĻ¸ā§āĻŦāϰ⧂āĻĒ, āĻāĻ•āϟāĻŋ āĻšāĻ¸ā§āϤāϞāĻŋāĻ–āĻŋāϤ āĻ…āĻ™ā§āϕ⧇āϰ āĻāĻ•āϟāĻŋ āϚāĻŋāĻ¤ā§āϰ āĻĻ⧇āĻ“āϝāĻŧāĻž āĻšāϞ⧇, āĻāĻ•āϟāĻŋ āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ āĻĒā§āϰāĻĨāĻŽā§‡ āϚāĻŋāĻ¤ā§āϰāϟāĻŋāϕ⧇ āĻāĻ•āϟāĻŋ āύāĻŋāĻŽā§āύ āĻŽāĻžāĻ¤ā§āϰāĻŋāĻ• āϏ⧁āĻĒā§āϤ āĻĒā§āϰāϤāĻŋāύāĻŋāϧāĻŋāĻ¤ā§āĻŦ⧇ āĻāύāϕ⧋āĻĄ āĻ•āϰ⧇, āϤāĻžāϰāĻĒāϰ⧇ āϏ⧁āĻĒā§āϤ āωāĻĒāĻ¸ā§āĻĨāĻžāĻĒāύāĻžāϟāĻŋāϕ⧇ āĻāĻ•āϟāĻŋ āϚāĻŋāĻ¤ā§āϰ⧇ āĻĄāĻŋāϕ⧋āĻĄ āĻ•āϰ⧇āĨ¤ āĻāĻ•āϟāĻŋ āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ āĻ¤ā§āϰ⧁āϟāĻŋ āĻ•āĻŽāĻŋāϝāĻŧ⧇ āĻĄā§‡āϟāĻž āϏāĻ‚āϕ⧁āϚāĻŋāϤ āĻ•āϰāϤ⧇ āĻļ⧇āϖ⧇āĨ¤

āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ āϏāĻŽā§āĻĒāĻ°ā§āϕ⧇ āφāϰāĻ“ āϜāĻžāύāϤ⧇, āĻ…āύ⧁āĻ—ā§āϰāĻš āĻ•āϰ⧇ āχāϝāĻŧāĻžāύ āϗ⧁āĻĄāĻĢ⧇āϞ⧋, āχāϝāĻŧā§‹āĻļ⧁āϝāĻŧāĻž āĻŦ⧇āĻ™ā§āĻ—āĻŋāĻ“ āĻāĻŦāĻ‚ āĻ…ā§āϝāĻžāϰāύ āϕ⧋āϰāĻ­āĻŋāϞ⧇āϰ āĻĄāĻŋāĻĒ āϞāĻžāĻ°ā§āύāĻŋāĻ‚ āĻĨ⧇āϕ⧇ āĻ…āĻ§ā§āϝāĻžāϝāĻŧ 14 āĻĒāĻĄāĻŧāĻžāϰ āĻ•āĻĨāĻž āĻŦāĻŋāĻŦ⧇āϚāύāĻž āĻ•āϰ⧁āύāĨ¤

TensorFlow āĻāĻŦāĻ‚ āĻ…āĻ¨ā§āϝāĻžāĻ¨ā§āϝ āϞāĻžāχāĻŦā§āϰ⧇āϰāĻŋ āφāĻŽāĻĻāĻžāύāĻŋ āĻ•āϰ⧁āύ

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf

from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model

Load the dataset

To start, you will train the basic autoencoder using the Fashion MNIST dataset. Each image in this dataset is 28x28 pixels.

(x_train, _), (x_test, _) = fashion_mnist.load_data()

# Scale pixel values to the [0, 1] range.
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

print(x_train.shape)
print(x_test.shape)
(60000, 28, 28)
(10000, 28, 28)

āĻĒā§āϰāĻĨāĻŽ āωāĻĻāĻžāĻšāϰāĻŖ: āĻŽā§ŒāϞāĻŋāĻ• āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ

āĻŽā§ŒāϞāĻŋāĻ• āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ āĻĢāϞāĻžāĻĢāϞ

Define an autoencoder with two Dense layers: an encoder, which compresses the images into a 64 dimensional latent vector, and a decoder, that reconstructs the original image from the latent space.

To define your model, use the Keras Model Subclassing API.

latent_dim = 64 

class Autoencoder(Model):
  def __init__(self, latent_dim):
    super(Autoencoder, self).__init__()
    self.latent_dim = latent_dim   
    self.encoder = tf.keras.Sequential([
      layers.Flatten(),                             # 28x28 image -> 784 vector
      layers.Dense(latent_dim, activation='relu'),  # 784 -> 64-dim latent code
    ])
    self.decoder = tf.keras.Sequential([
      layers.Dense(784, activation='sigmoid'),      # 64 -> 784, pixels in [0, 1]
      layers.Reshape((28, 28))                      # 784 vector -> 28x28 image
    ])

  def call(self, x):
    encoded = self.encoder(x)
    decoded = self.decoder(encoded)
    return decoded

autoencoder = Autoencoder(latent_dim)
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())

āχāύāĻĒ⧁āϟ āĻāĻŦāĻ‚ āϞāĻ•ā§āĻˇā§āϝ āωāĻ­āϝāĻŧ āĻšāĻŋāϏāĻžāĻŦ⧇ x_train āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰ⧇ āĻŽāĻĄā§‡āϞāϕ⧇ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āĻĻāĻŋāύāĨ¤ encoder 784 āĻŽāĻžāĻ¤ā§āϰāĻž āĻĨ⧇āϕ⧇ āϏ⧁āĻĒā§āϤ āĻ¸ā§āĻĨāĻžāύ āĻĨ⧇āϕ⧇ āĻĄā§‡āϟāĻžāϏ⧇āϟāϕ⧇ āϏāĻ‚āϕ⧁āϚāĻŋāϤ āĻ•āϰāϤ⧇ āĻļāĻŋāĻ–āĻŦ⧇ āĻāĻŦāĻ‚ decoder āφāϏāϞ āϚāĻŋāĻ¤ā§āϰāϗ⧁āϞāĻŋāϕ⧇ āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ āĻ•āϰāϤ⧇ āĻļāĻŋāĻ–āĻŦ⧇āĨ¤ .

autoencoder.fit(x_train, x_train,
                epochs=10,
                shuffle=True,
                validation_data=(x_test, x_test))
Epoch 1/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0243 - val_loss: 0.0140
Epoch 2/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0116 - val_loss: 0.0106
Epoch 3/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0100 - val_loss: 0.0098
Epoch 4/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0094 - val_loss: 0.0094
Epoch 5/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0092 - val_loss: 0.0092
Epoch 6/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0090 - val_loss: 0.0091
Epoch 7/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0090 - val_loss: 0.0090
Epoch 8/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0089 - val_loss: 0.0090
Epoch 9/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0088 - val_loss: 0.0089
Epoch 10/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0088 - val_loss: 0.0089
<keras.callbacks.History at 0x7ff1d35df550>
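
As an optional sanity check, you could also plot the loss curves, as done for the anomaly detector later in this tutorial. This sketch assumes you assign the return value of the fit() call above to a variable, e.g. history = autoencoder.fit(...); fit() returns a tf.keras.callbacks.History object.

# Visualize training vs. validation loss from the History object.
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.xlabel("Epoch")
plt.legend()
plt.show()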

āĻāĻ–āύ āϝ⧇āĻšā§‡āϤ⧁ āĻŽāĻĄā§‡āϞāϟāĻŋ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŋāϤ āĻšāϝāĻŧ⧇āϛ⧇, āφāϏ⧁āύ āĻĒāϰ⧀āĻ•ā§āώāĻž āϏ⧇āϟ āĻĨ⧇āϕ⧇ āĻ›āĻŦāĻŋāϗ⧁āϞāĻŋāϕ⧇ āĻāύāϕ⧋āĻĄāĻŋāĻ‚ āĻāĻŦāĻ‚ āĻĄāĻŋāϕ⧋āĻĄāĻŋāĻ‚ āĻ•āϰ⧇ āĻĒāϰ⧀āĻ•ā§āώāĻž āĻ•āϰāĻŋ⧎

encoded_imgs = autoencoder.encoder(x_test).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
  # display original
  ax = plt.subplot(2, n, i + 1)
  plt.imshow(x_test[i])
  plt.title("original")
  plt.gray()
  ax.get_xaxis().set_visible(False)
  ax.get_yaxis().set_visible(False)

  # display reconstruction
  ax = plt.subplot(2, n, i + 1 + n)
  plt.imshow(decoded_imgs[i])
  plt.title("reconstructed")
  plt.gray()
  ax.get_xaxis().set_visible(False)
  ax.get_yaxis().set_visible(False)
plt.show()

[Figure: original test images (top row) and their reconstructions (bottom row)]

āĻĻā§āĻŦāĻŋāϤ⧀āϝāĻŧ āωāĻĻāĻžāĻšāϰāĻŖ: āχāĻŽā§‡āϜ denoising

[Figure: image denoising results]

āĻāĻ•āϟāĻŋ āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰāϕ⧇ āĻ›āĻŦāĻŋ āĻĨ⧇āϕ⧇ āĻļāĻŦā§āĻĻ āĻ…āĻĒāϏāĻžāϰāĻŖ āĻ•āϰāϤ⧇āĻ“ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āĻĻ⧇āĻ“āϝāĻŧāĻž āϝ⧇āϤ⧇ āĻĒāĻžāϰ⧇āĨ¤ āύāĻŋāĻŽā§āύāϞāĻŋāĻ–āĻŋāϤ āĻŦāĻŋāĻ­āĻžāϗ⧇, āφāĻĒāύāĻŋ āĻĒā§āϰāϤāĻŋāϟāĻŋ āĻ›āĻŦāĻŋāϤ⧇ āĻāϞ⧋āĻŽā§‡āϞ⧋ āĻļāĻŦā§āĻĻ āĻĒā§āϰāϝāĻŧā§‹āĻ— āĻ•āϰ⧇ āĻĢā§āϝāĻžāĻļāύ MNIST āĻĄā§‡āϟāĻžāϏ⧇āĻŸā§‡āϰ āĻāĻ•āϟāĻŋ āϕ⧋āϞāĻžāĻšāϞāĻĒā§‚āĻ°ā§āĻŖ āϏāĻ‚āĻ¸ā§āĻ•āϰāĻŖ āϤ⧈āϰāĻŋ āĻ•āϰāĻŦ⧇āύāĨ¤ āϤāĻžāϰāĻĒāϰ⧇ āφāĻĒāύāĻŋ āĻāĻ•āϟāĻŋ āĻ¸ā§āĻŦāϝāĻŧāĻ‚āĻ•ā§āϰāĻŋāϝāĻŧ āĻāύāϕ⧋āĻĄāĻžāϰāϕ⧇ āχāύāĻĒ⧁āϟ āĻšāĻŋāϏāĻžāĻŦ⧇ āĻļā§‹āϰāĻ—ā§‹āϞ āχāĻŽā§‡āϜ āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰ⧇ āĻāĻŦāĻ‚ āφāϏāϞ āϚāĻŋāĻ¤ā§āϰāϟāĻŋāϕ⧇ āϞāĻ•ā§āĻˇā§āϝ āĻšāĻŋāϏāĻžāĻŦ⧇ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āĻĻ⧇āĻŦ⧇āύāĨ¤

Let's reimport the dataset to omit the modifications made earlier.

(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

print(x_train.shape)
(60000, 28, 28, 1)

āχāĻŽā§‡āϜ āĻāϞ⧋āĻŽā§‡āϞ⧋ āĻļāĻŦā§āĻĻ āϝ⧋āĻ— āĻ•āϰāĻž

noise_factor = 0.2
# Add Gaussian noise, then clip back into the valid [0, 1] pixel range.
x_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape)
x_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape)

x_train_noisy = tf.clip_by_value(x_train_noisy, clip_value_min=0., clip_value_max=1.)
x_test_noisy = tf.clip_by_value(x_test_noisy, clip_value_min=0., clip_value_max=1.)

Plot the noisy images.

n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
    ax = plt.subplot(1, n, i + 1)
    plt.title("original + noise")
    plt.imshow(tf.squeeze(x_test_noisy[i]))
    plt.gray()
plt.show()

[Figure: noisy test images]

āĻāĻ•āϟāĻŋ āĻ•āύāĻ­ā§‹āϞāĻŋāωāĻļāύāĻžāϞ āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ āϏāĻ‚āĻœā§āĻžāĻžāϝāĻŧāĻŋāϤ āĻ•āϰ⧁āύ

āĻāχ āωāĻĻāĻžāĻšāϰāϪ⧇, āφāĻĒāύāĻŋ encoder Conv2D āĻ¸ā§āϤāϰāϗ⧁āϞāĻŋ āĻāĻŦāĻ‚ decoder Conv2DT āĻŸā§āϰāĻžāĻ¨ā§āϏāĻĒā§‹āϜ āĻ¸ā§āϤāϰāϗ⧁āϞāĻŋ āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰ⧇ āĻāĻ•āϟāĻŋ āϰ⧂āĻĒāĻžāĻ¨ā§āϤāϰāĻŽā§‚āϞāĻ• āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰāϕ⧇ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āĻĻ⧇āĻŦ⧇āύ⧎

class Denoise(Model):
  def __init__(self):
    super(Denoise, self).__init__()
    self.encoder = tf.keras.Sequential([
      layers.Input(shape=(28, 28, 1)),
      layers.Conv2D(16, (3, 3), activation='relu', padding='same', strides=2),  # 28x28 -> 14x14
      layers.Conv2D(8, (3, 3), activation='relu', padding='same', strides=2)])  # 14x14 -> 7x7

    self.decoder = tf.keras.Sequential([
      layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),   # 7x7 -> 14x14
      layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),  # 14x14 -> 28x28
      layers.Conv2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same')])              # single-channel output

  def call(self, x):
    encoded = self.encoder(x)
    decoded = self.decoder(encoded)
    return decoded

autoencoder = Denoise()
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
autoencoder.fit(x_train_noisy, x_train,
                epochs=10,
                shuffle=True,
                validation_data=(x_test_noisy, x_test))
Epoch 1/10
1875/1875 [==============================] - 8s 3ms/step - loss: 0.0169 - val_loss: 0.0107
Epoch 2/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0095 - val_loss: 0.0086
Epoch 3/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0082 - val_loss: 0.0080
Epoch 4/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0078 - val_loss: 0.0077
Epoch 5/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0076 - val_loss: 0.0075
Epoch 6/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0074 - val_loss: 0.0074
Epoch 7/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0073 - val_loss: 0.0073
Epoch 8/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0072 - val_loss: 0.0072
Epoch 9/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0071 - val_loss: 0.0071
Epoch 10/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0070 - val_loss: 0.0071
<keras.callbacks.History at 0x7ff1c45a31d0>

āφāϏ⧁āύ āĻāύāϕ⧋āĻĄāĻžāϰ⧇āϰ āĻāĻ•āϟāĻŋ āϏāĻžāϰāĻžāĻ‚āĻļ āĻĻ⧇āϖ⧇ āύ⧇āĻ“āϝāĻŧāĻž āϝāĻžāĻ•āĨ¤ āϞāĻ•ā§āĻˇā§āϝ āĻ•āϰ⧁āύ āĻ•āĻŋāĻ­āĻžāĻŦ⧇ 28x28 āĻĨ⧇āϕ⧇ 7x7 āĻĒāĻ°ā§āϝāĻ¨ā§āϤ āϚāĻŋāĻ¤ā§āϰāϗ⧁āϞāĻŋ āĻĄāĻžāωāύāĻ¸ā§āϝāĻžāĻŽā§āĻĒ āĻ•āϰāĻž āĻšāϝāĻŧ⧇āϛ⧇āĨ¤

autoencoder.encoder.summary()
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 14, 14, 16)        160       
                                                                 
 conv2d_1 (Conv2D)           (None, 7, 7, 8)           1160      
                                                                 
=================================================================
Total params: 1,320
Trainable params: 1,320
Non-trainable params: 0
_________________________________________________________________
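
These shapes follow from padding='same' with strides=2: each Conv2D layer halves the spatial size, producing ceil(input / stride) along each dimension. A quick check of that arithmetic:

import math

# With padding='same', a strided conv outputs ceil(input_size / stride).
size = 28
for stride in (2, 2):                 # the two encoder Conv2D layers
    size = math.ceil(size / stride)
    print(size)                       # prints 14, then 7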

The decoder upsamples the images back from 7x7 to 28x28.

autoencoder.decoder.summary()
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_transpose (Conv2DTra  (None, 14, 14, 8)        584       
 nspose)                                                         
                                                                 
 conv2d_transpose_1 (Conv2DT  (None, 28, 28, 16)       1168      
 ranspose)                                                       
                                                                 
 conv2d_2 (Conv2D)           (None, 28, 28, 1)         145       
                                                                 
=================================================================
Total params: 1,897
Trainable params: 1,897
Non-trainable params: 0
_________________________________________________________________

āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ āĻĻā§āĻŦāĻžāϰāĻž āωāĻ¤ā§āĻĒāĻžāĻĻāĻŋāϤ āϕ⧋āϞāĻžāĻšāϞāĻĒā§‚āĻ°ā§āĻŖ āϚāĻŋāĻ¤ā§āϰ āĻāĻŦāĻ‚ āĻ…āĻ¸ā§āĻŦā§€āĻ•ā§ƒāϤ āϚāĻŋāĻ¤ā§āϰ āωāĻ­āϝāĻŧāχ āĻĒā§āϞāϟ āĻ•āϰāĻžāĨ¤

# Encode the noisy test images, then decode them to produce denoised outputs.
encoded_imgs = autoencoder.encoder(x_test_noisy).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):

    # display original + noise
    ax = plt.subplot(2, n, i + 1)
    plt.title("original + noise")
    plt.imshow(tf.squeeze(x_test_noisy[i]))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # display reconstruction
    bx = plt.subplot(2, n, i + n + 1)
    plt.title("reconstructed")
    plt.imshow(tf.squeeze(decoded_imgs[i]))
    plt.gray()
    bx.get_xaxis().set_visible(False)
    bx.get_yaxis().set_visible(False)
plt.show()

[Figure: noisy inputs (top row) and denoised reconstructions (bottom row)]
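
Beyond inspecting the plots visually, you could quantify denoising quality, for instance with peak signal-to-noise ratio (PSNR). This is an optional extension, not part of the original tutorial; it uses tf.image.psnr on the arrays computed above.

# Higher PSNR means closer to the clean image. Compare the noisy inputs
# and the denoised reconstructions against the clean test images.
psnr_noisy = tf.image.psnr(x_test_noisy, x_test, max_val=1.0)
psnr_denoised = tf.image.psnr(decoded_imgs, x_test, max_val=1.0)
print("mean PSNR, noisy inputs:    ", tf.reduce_mean(psnr_noisy).numpy())
print("mean PSNR, reconstructions: ", tf.reduce_mean(psnr_denoised).numpy())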

Third example: Anomaly detection

Overview

āĻāχ āωāĻĻāĻžāĻšāϰāϪ⧇, āφāĻĒāύāĻŋ ECG5000 āĻĄā§‡āϟāĻžāϏ⧇āĻŸā§‡ āĻ…āϏāĻ™ā§āĻ—āϤāĻŋ āϏāύāĻžāĻ•ā§āϤ āĻ•āϰāϤ⧇ āĻāĻ•āϟāĻŋ āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰāϕ⧇ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āĻĻ⧇āĻŦ⧇āύāĨ¤ āĻāχ āĻĄā§‡āϟāĻžāϏ⧇āĻŸā§‡ 5,000āϟāĻŋ āχāϞ⧇āĻ•ā§āĻŸā§āϰ⧋āĻ•āĻžāĻ°ā§āĻĄāĻŋāĻ“āĻ—ā§āϰāĻžāĻŽ āϰāϝāĻŧ⧇āϛ⧇, āĻĒā§āϰāϤāĻŋāϟāĻŋāϤ⧇ 140āϟāĻŋ āĻĄā§‡āϟāĻž āĻĒāϝāĻŧ⧇āĻ¨ā§āϟ āϰāϝāĻŧ⧇āϛ⧇⧎ āφāĻĒāύāĻŋ āĻĄā§‡āϟāĻžāϏ⧇āĻŸā§‡āϰ āĻāĻ•āϟāĻŋ āϏāϰāϞ⧀āĻ•ā§ƒāϤ āϏāĻ‚āĻ¸ā§āĻ•āϰāĻŖ āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰāĻŦ⧇āύ, āϝ⧇āĻ–āĻžāύ⧇ āĻĒā§āϰāϤāĻŋāϟāĻŋ āωāĻĻāĻžāĻšāϰāĻŖāϕ⧇ 0 (āĻāĻ•āϟāĻŋ āĻ…āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻ›āĻ¨ā§āĻĻ⧇āϰ āϏāĻžāĻĨ⧇ āϏāĻŽā§āĻĒāĻ°ā§āĻ•āĻŋāϤ), āĻŦāĻž 1 (āĻāĻ•āϟāĻŋ āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻ›āĻ¨ā§āĻĻ⧇āϰ āϏāĻžāĻĨ⧇ āϏāĻŽā§āĻĒāĻ°ā§āĻ•āĻŋāϤ) āϞ⧇āĻŦ⧇āϞ āĻ•āϰāĻž āĻšāϝāĻŧ⧇āϛ⧇āĨ¤ āφāĻĒāύāĻŋ āĻ…āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻ›āĻ¨ā§āĻĻ āϏāύāĻžāĻ•ā§āϤ āĻ•āϰāϤ⧇ āφāĻ—ā§āϰāĻšā§€.

How will you detect anomalies using an autoencoder? Recall that an autoencoder is trained to minimize reconstruction error. You will train an autoencoder on the normal rhythms only, then use it to reconstruct all the data. Our hypothesis is that the abnormal rhythms will have higher reconstruction error. You will then classify a rhythm as an anomaly if the reconstruction error surpasses a fixed threshold.
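
In code, that rule boils down to comparing each example's reconstruction error against a threshold. A minimal sketch of the idea, using mean absolute error as the metric (is_anomalous is a hypothetical helper; the tutorial builds its own predict function below):

def is_anomalous(model, data, threshold):
  reconstructions = model(data)
  errors = tf.keras.losses.mae(reconstructions, data)
  return errors > threshold  # True where reconstruction error is too high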

Load ECG data

The dataset you will use is based on one from timeseriesclassification.com.

# Download the dataset
dataframe = pd.read_csv('http://storage.googleapis.com/download.tensorflow.org/data/ecg.csv', header=None)
raw_data = dataframe.values
dataframe.head()
# The last element contains the labels
labels = raw_data[:, -1]

# The other data points are the electrocardiogram data
data = raw_data[:, 0:-1]

train_data, test_data, train_labels, test_labels = train_test_split(
    data, labels, test_size=0.2, random_state=21
)

Normalize the data to [0, 1].

min_val = tf.reduce_min(train_data)
max_val = tf.reduce_max(train_data)

train_data = (train_data - min_val) / (max_val - min_val)
test_data = (test_data - min_val) / (max_val - min_val)

train_data = tf.cast(train_data, tf.float32)
test_data = tf.cast(test_data, tf.float32)

āφāĻĒāύāĻŋ āĻļ⧁āϧ⧁āĻŽāĻžāĻ¤ā§āϰ āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻ›āĻ¨ā§āĻĻ āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰ⧇ āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰāϕ⧇ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āĻĻ⧇āĻŦ⧇āύ, āϝāĻž āĻāχ āĻĄā§‡āϟāĻžāϏ⧇āĻŸā§‡ 1 āĻšāĻŋāϏāĻžāĻŦ⧇ āϞ⧇āĻŦ⧇āϞ āĻ•āϰāĻž āĻšāϝāĻŧ⧇āϛ⧇āĨ¤ āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻ›āĻ¨ā§āĻĻāϗ⧁āϞāĻŋāϕ⧇ āĻ…āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻ›āĻ¨ā§āĻĻ āĻĨ⧇āϕ⧇ āφāϞāĻžāĻĻāĻž āĻ•āϰ⧁āύāĨ¤

train_labels = train_labels.astype(bool)
test_labels = test_labels.astype(bool)

normal_train_data = train_data[train_labels]
normal_test_data = test_data[test_labels]

anomalous_train_data = train_data[~train_labels]
anomalous_test_data = test_data[~test_labels]

āĻāĻ•āϟāĻŋ āϏāĻžāϧāĻžāϰāĻŖ āχāϏāĻŋāϜāĻŋ āĻĒā§āϞāϟ āĻ•āϰ⧁āύāĨ¤

plt.grid()
plt.plot(np.arange(140), normal_train_data[0])
plt.title("A Normal ECG")
plt.show()

[Figure: a normal ECG]

āĻāĻ•āϟāĻŋ āĻ…āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āχāϏāĻŋāϜāĻŋ āĻĒā§āϞāϟ āĻ•āϰ⧁āύāĨ¤

plt.grid()
plt.plot(np.arange(140), anomalous_train_data[0])
plt.title("An Anomalous ECG")
plt.show()

[Figure: an anomalous ECG]

Build the model

class AnomalyDetector(Model):
  def __init__(self):
    super(AnomalyDetector, self).__init__()
    self.encoder = tf.keras.Sequential([
      layers.Dense(32, activation="relu"),
      layers.Dense(16, activation="relu"),
      layers.Dense(8, activation="relu")])       # 140 time steps -> 8-dim latent code

    self.decoder = tf.keras.Sequential([
      layers.Dense(16, activation="relu"),
      layers.Dense(32, activation="relu"),
      layers.Dense(140, activation="sigmoid")])  # reconstruct all 140 time steps

  def call(self, x):
    encoded = self.encoder(x)
    decoded = self.decoder(encoded)
    return decoded

autoencoder = AnomalyDetector()
autoencoder.compile(optimizer='adam', loss='mae')

āϞāĻ•ā§āĻˇā§āϝ āĻ•āϰ⧁āύ āϝ⧇ āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰāϕ⧇ āĻļ⧁āϧ⧁āĻŽāĻžāĻ¤ā§āϰ āϏāĻžāϧāĻžāϰāĻŖ āχāϏāĻŋāϜāĻŋ āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰ⧇ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŋāϤ āĻ•āϰāĻž āĻšāϝāĻŧ, āϤāĻŦ⧇ āϏāĻŽā§āĻĒā§‚āĻ°ā§āĻŖ āĻĒāϰ⧀āĻ•ā§āώāĻžāϰ āϏ⧇āϟ āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰ⧇ āĻŽā§‚āĻ˛ā§āϝāĻžāϝāĻŧāύ āĻ•āϰāĻž āĻšāϝāĻŧāĨ¤

history = autoencoder.fit(normal_train_data, normal_train_data, 
          epochs=20, 
          batch_size=512,
          validation_data=(test_data, test_data),
          shuffle=True)
Epoch 1/20
5/5 [==============================] - 1s 33ms/step - loss: 0.0576 - val_loss: 0.0531
Epoch 2/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0552 - val_loss: 0.0514
Epoch 3/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0519 - val_loss: 0.0499
Epoch 4/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0483 - val_loss: 0.0475
Epoch 5/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0445 - val_loss: 0.0451
Epoch 6/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0409 - val_loss: 0.0432
Epoch 7/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0377 - val_loss: 0.0415
Epoch 8/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0348 - val_loss: 0.0401
Epoch 9/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0319 - val_loss: 0.0388
Epoch 10/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0293 - val_loss: 0.0378
Epoch 11/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0273 - val_loss: 0.0369
Epoch 12/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0259 - val_loss: 0.0361
Epoch 13/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0249 - val_loss: 0.0354
Epoch 14/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0239 - val_loss: 0.0346
Epoch 15/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0230 - val_loss: 0.0340
Epoch 16/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0222 - val_loss: 0.0335
Epoch 17/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0215 - val_loss: 0.0331
Epoch 18/20
5/5 [==============================] - 0s 9ms/step - loss: 0.0211 - val_loss: 0.0331
Epoch 19/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0208 - val_loss: 0.0329
Epoch 20/20
5/5 [==============================] - 0s 8ms/step - loss: 0.0206 - val_loss: 0.0327
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
<matplotlib.legend.Legend at 0x7ff1d339b790>

[Figure: training and validation loss curves]

āφāĻĒāύāĻŋ āĻļā§€āĻ˜ā§āϰāχ āĻāĻ•āϟāĻŋ ECG āϕ⧇ āĻ…āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻšāĻŋāϏāĻžāĻŦ⧇ āĻļā§āϰ⧇āĻŖā§€āĻŦāĻĻā§āϧ āĻ•āϰāĻŦ⧇āύ āϝāĻĻāĻŋ āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ⧇āϰ āĻ¤ā§āϰ⧁āϟāĻŋ āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāϪ⧇āϰ āωāĻĻāĻžāĻšāϰāĻŖ āĻĨ⧇āϕ⧇ āĻāĻ•āϟāĻŋ āφāĻĻāĻ°ā§āĻļ āĻŦāĻŋāĻšā§āϝ⧁āϤāĻŋāϰ āĻšā§‡āϝāĻŧ⧇ āĻŦ⧇āĻļāĻŋ āĻšāϝāĻŧāĨ¤ āĻĒā§āϰāĻĨāĻŽā§‡, āφāϏ⧁āύ āĻŸā§āϰ⧇āύāĻŋāĻ‚ āϏ⧇āϟ āĻĨ⧇āϕ⧇ āĻāĻ•āϟāĻŋ āϏāĻžāϧāĻžāϰāĻŖ āχāϏāĻŋāϜāĻŋ, āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰ āĻĻā§āĻŦāĻžāϰāĻž āĻāύāϕ⧋āĻĄ āĻāĻŦāĻ‚ āĻĄāĻŋāϕ⧋āĻĄ āĻ•āϰāĻžāϰ āĻĒāϰ⧇ āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ āĻāĻŦāĻ‚ āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ āĻ¤ā§āϰ⧁āϟāĻŋāϰ āĻĒā§āϞāϟ āĻ•āϰāĻž āϝāĻžāĻ•āĨ¤

encoded_data = autoencoder.encoder(normal_test_data).numpy()
decoded_data = autoencoder.decoder(encoded_data).numpy()

plt.plot(normal_test_data[0], 'b')
plt.plot(decoded_data[0], 'r')
plt.fill_between(np.arange(140), decoded_data[0], normal_test_data[0], color='lightcoral')
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()

[Figure: input, reconstruction, and error for a normal ECG]

āĻāĻ•āϟāĻŋ āĻ…āύ⧁āϰ⧂āĻĒ āĻĒā§āϞāϟ āϤ⧈āϰāĻŋ āĻ•āϰ⧁āύ, āĻāχ āϏāĻŽāϝāĻŧ āĻāĻ•āϟāĻŋ āĻ…āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻĒāϰ⧀āĻ•ā§āώāĻžāϰ āωāĻĻāĻžāĻšāϰāϪ⧇āϰ āϜāĻ¨ā§āϝāĨ¤

encoded_data = autoencoder.encoder(anomalous_test_data).numpy()
decoded_data = autoencoder.decoder(encoded_data).numpy()

plt.plot(anomalous_test_data[0], 'b')
plt.plot(decoded_data[0], 'r')
plt.fill_between(np.arange(140), decoded_data[0], anomalous_test_data[0], color='lightcoral')
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()

[Figure: input, reconstruction, and error for an anomalous ECG]

āĻ…āϏāĻ™ā§āĻ—āϤāĻŋāϗ⧁āϞāĻŋ āϏāύāĻžāĻ•ā§āϤ āĻ•āϰ⧁āύ

āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ⧇āϰ āĻ•ā§āώāϤāĻŋ āĻāĻ•āϟāĻŋ āύāĻŋāĻ°ā§āĻĻāĻŋāĻˇā§āϟ āĻĨā§āϰ⧇āĻļāĻšā§‹āĻ˛ā§āĻĄā§‡āϰ āĻšā§‡āϝāĻŧ⧇ āĻŦ⧇āĻļāĻŋ āĻ•āĻŋāύāĻž āϤāĻž āĻ—āĻŖāύāĻž āĻ•āϰ⧇ āĻ…āϏāĻ™ā§āĻ—āϤāĻŋāϗ⧁āϞāĻŋ āϏāύāĻžāĻ•ā§āϤ āĻ•āϰ⧁āύ⧎ āĻāχ āϟāĻŋāωāĻŸā§‹āϰāĻŋāϝāĻŧāĻžāϞ⧇, āφāĻĒāύāĻŋ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āϏ⧇āϟ āĻĨ⧇āϕ⧇ āϏāĻžāϧāĻžāϰāĻŖ āωāĻĻāĻžāĻšāϰāϪ⧇āϰ āĻ—āĻĄāĻŧ āĻ—āĻĄāĻŧ āĻ¤ā§āϰ⧁āϟāĻŋ āĻ—āĻŖāύāĻž āĻ•āϰāĻŦ⧇āύ, āϤāĻžāϰāĻĒāϰ āĻ­āĻŦāĻŋāĻˇā§āϝāϤ⧇āϰ āωāĻĻāĻžāĻšāϰāĻŖāϗ⧁āϞāĻŋāϕ⧇ āĻ…āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āĻšāĻŋāϏāĻžāĻŦ⧇ āĻļā§āϰ⧇āĻŖā§€āĻŦāĻĻā§āϧ āĻ•āϰāĻŦ⧇āύ āϝāĻĻāĻŋ āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ⧇āϰ āĻ¤ā§āϰ⧁āϟāĻŋāϟāĻŋ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āϏ⧇āϟ āĻĨ⧇āϕ⧇ āĻāĻ•āϟāĻŋ āφāĻĻāĻ°ā§āĻļ āĻŦāĻŋāĻšā§āϝ⧁āϤāĻŋāϰ āĻšā§‡āϝāĻŧ⧇ āĻŦ⧇āĻļāĻŋ āĻšāϝāĻŧāĨ¤

āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāĻŖ āϏ⧇āϟ āĻĨ⧇āϕ⧇ āĻ¸ā§āĻŦāĻžāĻ­āĻžāĻŦāĻŋāĻ• āχāϏāĻŋāϜāĻŋāϤ⧇ āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ⧇āϰ āĻ¤ā§āϰ⧁āϟāĻŋ āĻĒā§āϞāϟ āĻ•āϰ⧁āύ

reconstructions = autoencoder.predict(normal_train_data)
train_loss = tf.keras.losses.mae(reconstructions, normal_train_data)

plt.hist(train_loss[None,:], bins=50)
plt.xlabel("Train loss")
plt.ylabel("No of examples")
plt.show()

[Figure: histogram of reconstruction loss on normal training ECGs]

āĻāĻ•āϟāĻŋ āĻĨā§āϰ⧇āĻļāĻšā§‹āĻ˛ā§āĻĄ āĻŽāĻžāύ āϚāϝāĻŧāύ āĻ•āϰ⧁āύ āϝāĻž āĻ—āĻĄāĻŧ āĻĨ⧇āϕ⧇ āωāĻĒāϰ⧇ āĻāĻ•āϟāĻŋ āφāĻĻāĻ°ā§āĻļ āĻŦāĻŋāĻšā§āϝ⧁āϤāĻŋāĨ¤

threshold = np.mean(train_loss) + np.std(train_loss)
print("Threshold: ", threshold)
Threshold:  0.03241627

If you examine the reconstruction error for the anomalous examples in the test set, you'll notice most have greater reconstruction error than the threshold. By varying the threshold, you can adjust the precision and recall of your classifier.

reconstructions = autoencoder.predict(anomalous_test_data)
test_loss = tf.keras.losses.mae(reconstructions, anomalous_test_data)

plt.hist(test_loss[None, :], bins=50)
plt.xlabel("Test loss")
plt.ylabel("No of examples")
plt.show()

[Figure: histogram of reconstruction loss on anomalous test ECGs]

āĻĒ⧁āύāĻ°ā§āĻ—āĻ āύ āĻ¤ā§āϰ⧁āϟāĻŋ āĻĨā§āϰ⧇āĻļāĻšā§‹āĻ˛ā§āĻĄā§‡āϰ āĻšā§‡āϝāĻŧ⧇ āĻŦ⧇āĻļāĻŋ āĻšāϞ⧇ āĻāĻ•āϟāĻŋ āĻ…āϏāĻ™ā§āĻ—āϤāĻŋ āĻšāĻŋāϏāĻžāĻŦ⧇ āĻāĻ•āϟāĻŋ ECG āĻļā§āϰ⧇āĻŖā§€āĻŦāĻĻā§āϧ āĻ•āϰ⧁āύāĨ¤

def predict(model, data, threshold):
  reconstructions = model(data)
  loss = tf.keras.losses.mae(reconstructions, data)
  # True = normal rhythm: reconstruction error is below the threshold.
  return tf.math.less(loss, threshold)

def print_stats(predictions, labels):
  print("Accuracy = {}".format(accuracy_score(labels, predictions)))
  print("Precision = {}".format(precision_score(labels, predictions)))
  print("Recall = {}".format(recall_score(labels, predictions)))
preds = predict(autoencoder, test_data, threshold)
print_stats(preds, test_labels)
Accuracy = 0.944
Precision = 0.9921875
Recall = 0.9071428571428571
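
Since raising the threshold trades recall for precision (and vice versa), you could sweep a range of candidate values and pick the trade-off you want. A sketch reusing predict from above; the sweep grid around the chosen threshold is arbitrary.

# Sweep candidate thresholds to see the precision/recall trade-off.
for t in np.linspace(threshold - 0.01, threshold + 0.01, 5):
    preds = predict(autoencoder, test_data, t)
    print(f"threshold={t:.4f}  "
          f"precision={precision_score(test_labels, preds):.3f}  "
          f"recall={recall_score(test_labels, preds):.3f}")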

āĻĒāϰāĻŦāĻ°ā§āϤ⧀ āĻĒāĻĻāĻ•ā§āώ⧇āĻĒ

āĻ…āĻŸā§‹āĻāύāϕ⧋āĻĄāĻžāϰāϗ⧁āϞāĻŋāϰ āϏāĻžāĻĨ⧇ āĻ…āϏāĻ™ā§āĻ—āϤāĻŋ āϏāύāĻžāĻ•ā§āϤāĻ•āϰāĻŖ āϏāĻŽā§āĻĒāĻ°ā§āϕ⧇ āφāϰāĻ“ āϜāĻžāύāϤ⧇, āĻ­āĻŋāĻ•ā§āϟāϰ āĻĄāĻŋāĻŦāĻŋāϝāĻŧāĻžāϰ TensorFlow.js āĻāϰ āϏāĻžāĻĨ⧇ āύāĻŋāĻ°ā§āĻŽāĻŋāϤ āĻāχ āĻĻ⧁āĻ°ā§āĻĻāĻžāĻ¨ā§āϤ āχāĻ¨ā§āϟāĻžāϰ⧇āĻ•ā§āϟāĻŋāĻ­ āωāĻĻāĻžāĻšāϰāĻŖāϟāĻŋ āĻĻ⧇āϖ⧁āύāĨ¤ āĻŦāĻžāĻ¸ā§āϤāĻŦ-āĻŦāĻŋāĻļā§āĻŦ āĻŦā§āϝāĻŦāĻšāĻžāϰ⧇āϰ āĻ•ā§āώ⧇āĻ¤ā§āϰ⧇, āφāĻĒāύāĻŋ āĻļāĻŋāĻ–āϤ⧇ āĻĒāĻžāϰ⧇āύ āϝ⧇ āϕ⧀āĻ­āĻžāĻŦ⧇ āĻāϝāĻŧāĻžāϰāĻŦāĻžāϏ āĻŸā§‡āύāϏāϰāĻĢā§āϞ⧋ āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰ⧇ āφāχāĻāϏāĻāϏ āĻŸā§‡āϞāĻŋāĻŽā§‡āĻŸā§āϰāĻŋ āĻĄā§‡āϟāĻžāϤ⧇ āĻ…āϏāĻ™ā§āĻ—āϤāĻŋ āϏāύāĻžāĻ•ā§āϤ āĻ•āϰ⧇āĨ¤ āĻŦ⧇āϏāĻŋāĻ• āϏāĻŽā§āĻĒāĻ°ā§āϕ⧇ āφāϰāĻ“ āϜāĻžāύāϤ⧇, āĻĢā§āϰāĻžāρāϏ⧋āϝāĻŧāĻž āĻšā§‹āϞ⧇āĻŸā§‡āϰ āĻāχ āĻŦā§āϞāĻ— āĻĒā§‹āĻ¸ā§āϟāϟāĻŋ āĻĒāĻĄāĻŧāĻžāϰ āĻ•āĻĨāĻž āĻŦāĻŋāĻŦ⧇āϚāύāĻž āĻ•āϰ⧁āύāĨ¤ āφāϰāĻ“ āĻŦāĻŋāĻļāĻĻ āĻŦāĻŋāĻŦāϰāϪ⧇āϰ āϜāĻ¨ā§āϝ, āχāϝāĻŧāĻžāύ āϗ⧁āĻĄāĻĢ⧇āϞ⧋, āχāϝāĻŧā§‹āĻļ⧁āϝāĻŧāĻž āĻŦ⧇āĻ™ā§āĻ—āĻŋāĻ“ āĻāĻŦāĻ‚ āĻ…ā§āϝāĻžāϰāύ āϕ⧋āϰāĻ­āĻŋāϞ⧇āϰ āĻĄāĻŋāĻĒ āϞāĻžāĻ°ā§āύāĻŋāĻ‚ āĻĨ⧇āϕ⧇ āĻ…āĻ§ā§āϝāĻžāϝāĻŧ 14 āĻĻ⧇āϖ⧁āύāĨ¤