Let's discuss padding and its types in convolution layers. In a convolution layer we have kernels, and to make the resulting feature map more informative we apply padding to the image matrix (or any other kind of input array). There are three types of padding, illustrated below with a simple 1-D example.
Think of the kernel as a sliding window; to handle the borders, we pad the input array with zeros. This is a very common implementation, and it is easiest to show how it works with a small example: consider x as a filter and h as an input array.
x[i] = [6, 2]
h[i] = [1, 2, 5, 4]
Using zero padding on both sides of h, we can compute the convolution.
The filter x must be flipped first; otherwise the operation would be cross-correlation rather than convolution. First step (now with zero padding):
= 2 * 0 + 6 * 1 = 6
Second step:
= 2 * 1 + 6 * 2 = 14
Third step:
= 2 * 2 + 6 * 5 = 34
Fourth step:
= 2 * 5 + 6 * 4 = 34
Fifth step:
= 2 * 4 + 6 * 0 = 8
Listing all the steps above, the result of the convolution for this case is: Y = [6 14 34 34 8]
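The step-by-step computation above can be sketched as follows (Python with NumPy is our assumption; the original does not specify a language):

```python
import numpy as np

x = [6, 2]          # filter
h = [1, 2, 5, 4]    # input array

# Flip the filter (true convolution, not cross-correlation) and
# zero-pad h on both sides so the kernel covers every position.
x_flipped = x[::-1]                # [2, 6]
h_padded = [0] + h + [0]           # [0, 1, 2, 5, 4, 0]
y = [x_flipped[0] * h_padded[i] + x_flipped[1] * h_padded[i + 1]
     for i in range(len(h_padded) - 1)]
print(y)                           # [6, 14, 34, 34, 8]

# np.convolve performs the same flip-and-slide internally:
assert list(np.convolve(h, x, mode='full')) == y
```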

Output:
[6 14 34 34 8]
In this type of padding, often called causal padding, we add zeros only to the left of the array (or only at the top of a 2-D input matrix), so the output keeps the same length as the input.
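A minimal NumPy sketch of left-only padding (the library choice is our assumption; the original shows only the output):

```python
import numpy as np

x = np.array([6, 2])          # filter
h = np.array([1, 2, 5, 4])    # input array

# Causal padding: add (kernel size - 1) zeros on the left only,
# then apply the kernel at every fully-overlapping position.
h_padded = np.pad(h, (len(x) - 1, 0))    # [0 1 2 5 4]
y = np.convolve(h_padded, x, mode='valid')
print(y)                                 # [ 6 14 34 34]
```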

Output:
[6 14 34 34]
In this padding type, known as valid padding, we get a reduced output because no padding is added: the size of the output array decreases, since the kernel is applied only at positions where it fits entirely inside the h array. In some cases you want to reduce the dimension this way.
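Valid padding needs no explicit padding step; NumPy's 'valid' mode expresses it directly (again a sketch under the same assumptions, not the original code):

```python
import numpy as np

x = np.array([6, 2])          # filter
h = np.array([1, 2, 5, 4])    # input array

# mode='valid' adds no padding: the kernel is applied only where it
# fits entirely inside h, giving len(h) - len(x) + 1 = 3 outputs.
y = np.convolve(h, x, mode='valid')
print(y)                      # [14 34 34]
```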

Output:
[14 34 34]