Implementing an AutoEncoder in TensorFlow 2.0


An AutoEncoder is a data compression and decompression algorithm implemented with neural networks.

# Install TensorFlow 2.0 using one of the following commands
# CPU-only install:
# pip install -q tensorflow==2.0
# GPU install (CUDA and cuDNN must be available):
# pip install -q tensorflow-gpu==2.0

 

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
print(tf.__version__)

After confirming that the right version of TF has loaded, import the remaining dependencies and define the helper functions shown below. The standard_scale function standardizes the data column-wise (zero mean, unit variance) using scikit-learn's StandardScaler, and get_random_block_from_data draws a random contiguous mini-batch from the data for training.

import numpy as np
import sklearn.preprocessing as prep
import tensorflow.keras.layers as layers


def standard_scale(X_train, X_test):
    # Fit the scaler on the training data only, then apply it to both splits
    preprocessor = prep.StandardScaler().fit(X_train)
    X_train = preprocessor.transform(X_train)
    X_test = preprocessor.transform(X_test)
    return X_train, X_test


def get_random_block_from_data(data, batch_size):
    # Draw a random contiguous block of batch_size rows
    start_index = np.random.randint(0, len(data) - batch_size)
    return data[start_index:(start_index + batch_size)]
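As a quick sanity check, these helpers can be exercised on placeholder data before MNIST is loaded. The sketch below is an illustrative addition rather than part of the original walkthrough; dummy_train and dummy_test are made-up arrays that stand in for flattened 28x28 images.

# Illustration only: fake data in place of real MNIST digits
dummy_train = np.random.rand(1000, 784)
dummy_test = np.random.rand(200, 784)

scaled_train, scaled_test = standard_scale(dummy_train, dummy_test)
batch = get_random_block_from_data(scaled_train, batch_size=128)
print(batch.shape)  # (128, 784)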

AutoEncoders learn a lossy intermediate representation, also known as a compressed representation. This kind of dimensionality reduction is useful in many situations, even for image data that could be compressed losslessly, because the encoder is forced to learn a dense representation of the data. Here we will use the TensorFlow subclassing API to define custom layers for the encoder and decoder.

class Encoder(tf.keras.layers.Layer):
    '''Encodes a digit from the MNIST dataset'''

    def __init__(self,
                 n_dims,
                 name='encoder',
                 **kwargs):
        super(Encoder, self).__init__(name=name, **kwargs)
        self.n_dims = n_dims
        self.n_layers = 1
        self.encode_layer = layers.Dense(n_dims, activation='relu')

    @tf.function
    def call(self, inputs):
        return self.encode_layer(inputs)


class Decoder(tf.keras.layers.Layer):
    '''Decodes a digit from the MNIST dataset'''

    def __init__(self,
                 n_dims,
                 name='decoder',
                 **kwargs):
        super(Decoder, self).__init__(name=name, **kwargs)
        self.n_dims = n_dims
        self.n_layers = len(n_dims)
        self.decode_middle = layers.Dense(n_dims[0], activation='relu')
        self.recon_layer = layers.Dense(n_dims[1], activation='sigmoid')

    @tf.function
    def call(self, inputs):
        x = self.decode_middle(inputs)
        return self.recon_layer(x)
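The two layers can be wired together on a random batch to confirm that the shapes line up. This quick check is an illustrative addition, not part of the original article.

# Illustration only: random tensors standing in for flattened MNIST digits
sample = tf.random.normal((32, 784))
code = Encoder(200)(sample)        # -> (32, 200)
recon = Decoder([392, 784])(code)  # -> (32, 784)
print(code.shape, recon.shape)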

We then define a custom model that uses our previously defined custom layers to build the AutoEncoder model. The call method is overridden; it is the forward pass that runs when data is fed to the model object. Notice the tf.function decorator: it ensures that the function is executed as a graph, which speeds up execution.

class Autoencoder(tf.keras.Model):
    '''Vanilla autoencoder for MNIST digits'''

    def __init__(self,
                 n_dims=[200, 392, 784],
                 name='autoencoder',
                 **kwargs):
        super(Autoencoder, self).__init__(name=name, **kwargs)
        self.n_dims = n_dims
        self.encoder = Encoder(n_dims[0])
        self.decoder = Decoder([n_dims[1], n_dims[2]])

    @tf.function
    def call(self, inputs):
        x = self.encoder(inputs)
        return self.decoder(x)
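A throwaway forward pass (again an illustrative addition, not from the article) confirms that the model returns reconstructions with the same shape as its input.

# Illustration only: the reconstruction must match the input shape
toy_batch = tf.random.normal((16, 784))
toy_out = Autoencoder([200, 392, 784])(toy_batch)
print(toy_out.shape)  # expected: (16, 784)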

The next block of code loads the MNIST dataset, flattens the images, and runs the data through the preprocessing pipeline before training the AutoEncoder.

mnist = tf.keras.datasets.mnist

(X_train, _), (X_test, _) = mnist.load_data()
# Flatten each 28x28 image into a 784-dimensional vector
X_train = tf.cast(np.reshape(
    X_train, (X_train.shape[0],
              X_train.shape[1] * X_train.shape[2])), tf.float64)
X_test = tf.cast(
    np.reshape(X_test,
               (X_test.shape[0],
                X_test.shape[1] * X_test.shape[2])), tf.float64)

X_train, X_test = standard_scale(X_train, X_test)
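As an illustrative check that is not part of the original article, printing the shapes of the prepared splits confirms that each digit is now a 784-dimensional row.

print(X_train.shape, X_test.shape)  # expected: (60000, 784) (10000, 784)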

TensorFlow's tf.data API makes it easy to obtain shuffled batches of tensor slices from the training dataset. The following code block demonstrates the use of tf.data and also defines the hyperparameters for training the AutoEncoder model.

train_data = tf.data.Dataset.from_tensor_slices(
    X_train).batch(128).shuffle(buffer_size=1024)
test_data = tf.data.Dataset.from_tensor_slices(
    X_test).batch(128).shuffle(buffer_size=512)

n_samples = int(len(X_train) + len(X_test))
training_epochs = 20
batch_size = 128
display_step = 1

optimizer = tf.optimizers.Adam(learning_rate=0.01)
mse_loss = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
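The optimizer, mse_loss, and loss_metric defined above could also drive an explicit training loop built on tf.GradientTape. The sketch below is a hedged alternative to the compile/fit approach the article actually uses further down, not the article's own training code; ae_loop is a throwaway name introduced only for this illustration.

# Illustration only: explicit training loop as an alternative to model.fit
ae_loop = Autoencoder([200, 392, 784])

for epoch in range(training_epochs):
    for batch in train_data:
        batch = tf.cast(batch, tf.float32)  # Dense layers default to float32
        with tf.GradientTape() as tape:
            reconstruction = ae_loop(batch)
            loss = mse_loss(batch, reconstruction)
        grads = tape.gradient(loss, ae_loop.trainable_variables)
        optimizer.apply_gradients(zip(grads, ae_loop.trainable_variables))
        loss_metric(loss)
    if epoch % display_step == 0:
        print('epoch %d: mean loss = %.4f' % (epoch, float(loss_metric.result())))
    loss_metric.reset_states()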

We have now met all the prerequisites to train our AutoEncoder model. All that is left is to define an AutoEncoder object, compile the model with an optimizer and a loss, and call model.fit with the hyperparameters defined above. Voila! You can watch the loss decrease as the AutoEncoder improves its reconstructions.

ae = Autoencoder([200, 392, 784])
ae.compile(optimizer=tf.optimizers.Adam(0.01),
           loss='categorical_crossentropy')
ae.fit(X_train, X_train, batch_size=64, epochs=5)
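As a follow-up that is not included in the original article, one way to gauge the trained model is to reconstruct the test digits and report the mean squared reconstruction error.

# Illustration only: mean squared error between test digits and their reconstructions
reconstructions = ae.predict(X_test, batch_size=64)
test_mse = np.mean((np.asarray(X_test) - reconstructions) ** 2)
print('test reconstruction MSE: %.4f' % test_mse)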

You can check out the IPython notebook here and the Colab demo I provided here. Follow me on GitHub.
