fit(object, x = NULL, y = NULL, batch_size = NULL, epochs = 10, verbose = getOption("keras.fit_verbose", default = 1), callbacks = NULL, view_metrics = getOption("keras.view_metrics", default = "auto"), validation_split = 0, validation_data = NULL, shuffle = TRUE, class_weight = NULL, sample_weight = NULL, initial_epoch = 0, steps_per_epoch = NULL, validation_steps = NULL, ...)
Understanding a few important arguments:
- object: the model to train.
- x: our training data; can be a vector, array, or matrix.
- y: our training labels; can be a vector, array, or matrix.
- batch_size: an integer or NULL; by default it is set to 32. It specifies the number of samples per gradient update.
- epochs: an integer; the number of epochs we want to train our model for.
- verbose: specifies the verbosity mode (0 = silent, 1 = progress bar, 2 = one line per epoch).
- shuffle: whether we want to shuffle our training data before each epoch.
- steps_per_epoch: the total number of steps (batches of samples) taken before one epoch is declared finished and the next epoch starts. By default its value is set to NULL.
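To make the relationship between batch_size, steps per epoch, and gradient updates concrete, here is a minimal pure-Python sketch (a hypothetical helper for illustration only, not part of the keras API):

```python
import math

def gradient_updates(n_samples, batch_size=32, epochs=10):
    """Illustrative arithmetic: each epoch processes
    ceil(n_samples / batch_size) batches, and one gradient
    update happens per batch."""
    steps_per_epoch = math.ceil(n_samples / batch_size)
    return steps_per_epoch * epochs

# 1000 samples with batch_size 32 -> 32 steps per epoch,
# so 320 gradient updates over 10 epochs
print(gradient_updates(1000, batch_size=32, epochs=10))  # 320
```

This is why a smaller batch_size means more gradient updates per epoch for the same amount of data.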
How to use Keras fit:
model.fit(Xtrain, Ytrain, batch_size = 32, epochs = 100)
Here we first pass in the training data (Xtrain) and training labels (Ytrain). We then use Keras to train our model for 100 epochs with a batch size of 32.
When we call the .fit() function, it makes two assumptions: the entire training dataset can fit into memory, and no data augmentation is applied.
fit_generator(object, generator, steps_per_epoch, epochs = 1, verbose = getOption("keras.fit_verbose", default = 1), callbacks = NULL, view_metrics = getOption("keras.view_metrics", default = "auto"), validation_data = NULL, validation_steps = NULL, class_weight = NULL, max_queue_size = 10, workers = 1, initial_epoch = 0)
Understanding several important arguments:
- object: the Keras model object.
- generator: a generator whose output must be a list of one of the forms:
  - (inputs, targets)
  - (inputs, targets, sample_weights)
  A single output of the generator makes a single batch, so all arrays in the list must have length equal to the batch size. The generator is expected to loop over its data indefinitely; it should never return or exit.
- steps_per_epoch: the total number of steps (batches) drawn from the generator before one epoch is declared finished and the next epoch starts. Its value is typically calculated as the total number of samples in your dataset divided by the batch size.
- epochs: an integer; the number of epochs we want to train our model for.
- verbose: specifies the verbosity mode (0 = silent, 1 = progress bar, 2 = one line per epoch).
- callbacks: a list of callback functions applied during the training of our model.
- validation_data: can be either:
  - a list of inputs and targets
  - a generator
  - a list of inputs, targets, and sample_weights
  It is used to evaluate the loss and metrics of the model at the end of each epoch.
- validation_steps: used only when validation_data is a generator. It specifies the total number of steps (batches) drawn from the validation generator at each epoch; its value is typically calculated as the total number of validation data points in your dataset divided by the batch size.
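The "loop forever, never return" contract above can be sketched in plain Python. This is a hypothetical minimal generator for illustration; real code would usually yield NumPy arrays and shuffle between passes:

```python
def batch_generator(inputs, targets, batch_size):
    """Minimal sketch of a generator usable with fit_generator:
    yields (inputs, targets) batches and loops over the data
    forever, since fit_generator stops drawing from it externally
    after steps_per_epoch batches per epoch."""
    i = 0
    n = len(inputs)
    while True:  # never return or exit
        batch_x = inputs[i:i + batch_size]
        batch_y = targets[i:i + batch_size]
        # note: the last slice of a pass may be short; real code
        # often shuffles and re-pads here
        i = (i + batch_size) % n
        yield (batch_x, batch_y)

xs = list(range(10))
ys = [v * 2 for v in xs]
gen = batch_generator(xs, ys, batch_size=4)
bx, by = next(gen)
print(bx, by)  # [0, 1, 2, 3] [0, 2, 4, 6]
```

With 10 samples and a batch size of 4, steps_per_epoch would be set to 10 // 4 = 2 so that one epoch consumes roughly one pass over the data.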
How to use Keras fit_generator:
# import assumed here; in standalone Keras, ImageDataGenerator lives in
# keras.preprocessing.image
from keras.preprocessing.image import ImageDataGenerator

# performing data augmentation with a training image generator
dataAugmentation = ImageDataGenerator(rotation_range = 30, zoom_range = 0.20,
    fill_mode = "nearest", shear_range = 0.20, horizontal_flip = True,
    width_shift_range = 0.1, height_shift_range = 0.1)

# training the model
model.fit_generator(dataAugmentation.flow(trainX, trainY, batch_size = 32),
    validation_data = (testX, testY),
    steps_per_epoch = len(trainX) // 32, epochs = 10)
Here we train our network for 10 epochs with a batch size of 32, and steps_per_epoch is set to the number of training samples divided by that batch size.
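As a quick sanity check on the steps_per_epoch arithmetic, using a hypothetical dataset size (the 2000 below is an assumption, not from the example):

```python
trainX_len = 2000              # hypothetical number of training images
batch_size = 32
steps_per_epoch = trainX_len // batch_size  # integer division, as in the example
print(steps_per_epoch)         # 62
```

The integer division means any leftover partial batch (here, 2000 - 62 * 32 = 16 images per pass) is simply absorbed by the generator looping over the data.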
For small, simple datasets it is recommended to use keras.fit. Real-world datasets, however, are rarely that easy to work with: they are huge and often cannot fit into computer memory. These datasets are more difficult to handle, and an important step in working with them is to augment the data, both to avoid overfitting the model and to improve its generalizability.
Data augmentation is a technique for artificially creating new training data from an existing training dataset, improving the performance of a deep learning neural network with the amount of data available. It is a form of regularization that makes our model generalize better than before.
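The idea can be sketched in a few lines of plain Python, using a tiny 2x3 "image" stored as nested lists and a single horizontal-flip transform (hypothetical helper names; real pipelines use ImageDataGenerator as shown below):

```python
def horizontal_flip(image):
    """One classic augmentation transform: flip each row left-to-right."""
    return [row[::-1] for row in image]

def augment_dataset(images):
    """Return the originals plus one flipped copy of each image,
    artificially doubling the training set (illustration only)."""
    return images + [horizontal_flip(img) for img in images]

dataset = [[[1, 2, 3],
            [4, 5, 6]]]        # one 2x3 "image"
augmented = augment_dataset(dataset)
print(len(augmented))          # 2: original + flipped copy
print(augmented[1])            # [[3, 2, 1], [6, 5, 4]]
```

Rotations, zooms, and shifts extend the same idea: each transform produces a plausible new sample the network has never seen pixel-for-pixel.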
Here we have used the Keras ImageDataGenerator object to apply data augmentation, randomly translating, resizing, rotating, etc., our images. Each new batch of our data is randomly adjusted according to the parameters provided to ImageDataGenerator.
When we call the .fit_generator() function, it makes the opposite assumptions: either the dataset is too large to fit into memory, or data augmentation needs to be applied, so batches must be produced on the fly by a generator.
So, we learned the difference between the Keras fit and fit_generator functions used to train a deep learning neural network.
.fit is used when the entire training dataset can fit into memory and no data augmentation is applied.
.fit_generator is used when either we have a huge dataset to fit in our memory, or when data augmentation needs to be applied.
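The rule of thumb in the two points above can be summarized in a small decision helper (a hypothetical function for illustration, not part of keras):

```python
def choose_training_function(fits_in_memory, needs_augmentation):
    """Summarize the .fit vs .fit_generator rule of thumb:
    .fit only when the data fits in memory AND no augmentation
    is needed; otherwise .fit_generator."""
    if fits_in_memory and not needs_augmentation:
        return "fit"
    return "fit_generator"

print(choose_training_function(True, False))   # fit
print(choose_training_function(False, False))  # fit_generator
print(choose_training_function(True, True))    # fit_generator
```

Note that in recent versions of Keras, Model.fit itself accepts generators and fit_generator is deprecated, but the memory-vs-augmentation trade-off described here still governs whether you feed arrays or a generator.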