We often have a dataset whose points follow a common trend, but each point is scattered around the line of best fit because of noise (a nonzero standard deviation). We can recover a single best-fit curve using the curve_fit()
function.
Using SciPy:
SciPy is a Python scientific-computing module that provides built-in implementations of many well-known mathematical functions. The scipy.optimize
package gives us a lot of optimization routines. A detailed list of all the optimize functions can be obtained by typing the following in the IPython console:
help(scipy.optimize)
Among the most commonly used methods are least-squares minimization, curve fitting, multidimensional scalar minimization, etc.
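As a quick illustration of the multidimensional scalar minimization mentioned above, here is a minimal sketch using scipy.optimize.minimize (the Rosenbrock test function and the starting point are our own choices, not from the original article):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a classic multidimensional test problem
# with a global minimum at (1, 1)
def rosen(v):
    return (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2

# The default method (BFGS for unconstrained problems), started from (0, 0)
res = minimize(rosen, x0=np.array([0.0, 0.0]))
print(res.x)  # approximately [1. 1.]
```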
Curve fitting example —
Input:
Output:
Input:
Output:
As you can see from the input, the dataset appears to be scattered around a sinusoidal function in the first case and an exponential function in the second. Given a model function and the data, curve_fit() determines the coefficients that produce the line of best fit.
Code showing the generation of the first example —

Output:
Sine function coefficients: [3.66474998 1.32876756]
Covariance of coefficients: [[ 5.43766857e-02 -3.69114170e-05]
 [-3.69114170e-05  1.02824503e-04]]
The second example can be achieved using NumPy's exponential function, as shown below:

import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(0, 1, num=40)
y = 3.45 * np.exp(1.334 * x) + np.random.normal(size=40)

def test(x, a, b):
    return a * np.exp(b * x)

param, param_cov = curve_fit(test, x, y)
However, if the coefficients are poorly suited to the data, the curve flattens out and does not provide the best fit. The following code illustrates this:

Output:
Sine function coefficients: [0.70867169 0.7346216]
Covariance of coefficients: [[ 2.87320136 -0.05245869]
 [-0.05245869  0.14094361]]
The blue dashed line is indeed the sine curve that minimizes the distances to all points in the dataset, but it still does not provide a good fit, because a sine function is the wrong model for this data.
Curve fitting should not be confused with regression. Both involve approximating data with functions. But the goal of curve fitting is to find the coefficients with which a given model of the explanatory variables can best reproduce another variable. Regression is a special case of curve fitting, but there you do not just want a curve that best fits the training data (which can lead to overfitting); you want a model that generalizes from the training data and can therefore predict new points effectively.
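To make the overfitting point concrete, here is a small sketch (our own illustration, not from the original text) in which a very high-degree polynomial fits the training data almost perfectly yet a plain line is the model you would actually want for generalization:

```python
import numpy as np

np.random.seed(1)  # assumption: seeded for reproducibility

# Noisy samples of an underlying straight line y = 2x
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + np.random.normal(scale=0.3, size=12)

# Low-degree model: does not hit every training point, but matches the trend
line = np.poly1d(np.polyfit(x_train, y_train, 1))
# Degree-11 polynomial through 12 points: interpolates the noise (overfits)
wiggly = np.poly1d(np.polyfit(x_train, y_train, 11))

train_mse_line = np.mean((line(x_train) - y_train) ** 2)
train_mse_wiggly = np.mean((wiggly(x_train) - y_train) ** 2)
print("train MSE, line:  ", train_mse_line)
print("train MSE, wiggly:", train_mse_wiggly)  # near zero: it fit the noise too

# Between the training points, the overfit polynomial oscillates away
# from the true underlying line
x_new = 0.5 * (x_train[:-1] + x_train[1:])  # midpoints between samples
print("new-point MSE, line:  ", np.mean((line(x_new) - 2 * x_new) ** 2))
print("new-point MSE, wiggly:", np.mean((wiggly(x_new) - 2 * x_new) ** 2))
```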