# Implement sigmoid function with Numpy


The sigmoid activation function squashes any real-valued input into the open interval (0, 1), which is why it is commonly used to turn a model's raw scores into probabilities during training.

```python
# Import matplotlib and numpy
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-10, 10, 100)
z = 1 / (1 + np.exp(-x))

plt.plot(x, z)
plt.xlabel("x")
plt.ylabel("Sigmoid(X)")

plt.show()
```

Output: a plot of the sigmoid curve over the interval [-10, 10].

Example #1: the same plot over a wider input range.

```python
# Import matplotlib and numpy
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-100, 100, 200)
z = 1 / (1 + np.exp(-x))

plt.plot(x, z)
plt.xlabel("x")
plt.ylabel("Sigmoid(X)")

plt.show()
```

Output: a plot of the sigmoid curve over the interval [-100, 100]; at this scale the transition looks like a step.

## How to calculate a logistic sigmoid function in Python?

This is the logistic sigmoid function:

F(x) = 1 / (1 + e^(-x))

I know x. How can I calculate F(x) in Python now?

Let"s say x = 0.458.

F(x) = ?
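As a quick sanity check, the value can be computed directly with the standard library (a minimal sketch of the formula above):

```python
import math

x = 0.458
F = 1 / (1 + math.exp(-x))
print(F)  # 0.6125396134409151
```

The scipy-based answers further down give the same value.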

## What"s the pythonic way to use getters and setters?

The "Pythonic" way is not to use "getters" and "setters", but to use plain attributes, like the question demonstrates, and `del` for deleting (but the names are changed to protect the innocent... builtins):

```python
value = "something"

obj.attribute = value
value = obj.attribute
del obj.attribute
```

If later, you want to modify the setting and getting, you can do so without having to alter user code, by using the `property` decorator:

```python
class Obj:
    """property demo"""

    @property             # first decorate the getter method
    def attribute(self):  # this getter method name is *the* name
        return self._attribute

    @attribute.setter     # the property decorates with `.setter` now
    def attribute(self, value):  # name, e.g. "attribute", is the same
        self._attribute = value  # the "value" name isn't special

    @attribute.deleter    # decorate with `.deleter`
    def attribute(self):  # again, the method name is the same
        del self._attribute
```

(Each decorator usage copies and updates the prior property object, so note that you should use the same name for each setter, getter, and deleter method.)

After defining the above, the original setting, getting, and deleting code is the same:

```python
obj = Obj()
obj.attribute = value
the_value = obj.attribute
del obj.attribute
```

You should avoid this:

```python
def set_property(property, value):
    ...
def get_property(property):
    ...
```

Firstly, the above doesn't work, because you don't provide an argument for the instance that the property would be set on (usually `self`), which would be:

```python
class Obj:

    def set_property(self, property, value):  # don't do this
        ...
    def get_property(self, property):         # don't do this either
        ...
```

Secondly, this duplicates the purpose of two special methods, `__setattr__` and `__getattr__`.

Thirdly, we also have the `setattr` and `getattr` builtin functions.

```python
setattr(object, "property_name", value)
getattr(object, "property_name", default_value)  # default is optional
```
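A minimal sketch of those builtins in action (the class and attribute names here are illustrative):

```python
class Thing:
    pass

obj = Thing()
setattr(obj, "attribute", "value")         # same as obj.attribute = "value"
print(getattr(obj, "attribute"))           # value
print(getattr(obj, "missing", "default"))  # falls back to the default
```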

The `@property` decorator is for creating getters and setters.

For example, we could modify the setting behavior to place restrictions on the value being set:

```python
class Protective(object):

    @property
    def protected_value(self):
        return self._protected_value

    @protected_value.setter
    def protected_value(self, value):
        if acceptable(value):  # e.g. a type or range check
            self._protected_value = value
```

In general, we want to avoid using `property` and just use direct attributes.

This is what is expected by users of Python. Following the rule of least-surprise, you should try to give your users what they expect unless you have a very compelling reason to the contrary.

## Demonstration

For example, say we needed our object's protected attribute to be an integer between 0 and 100 inclusive, and to prevent its deletion, with appropriate messages to inform the user of its proper usage:

```python
class Protective(object):
    """protected property demo"""

    def __init__(self, start_protected_value=0):
        self.protected_value = start_protected_value

    @property
    def protected_value(self):
        return self._protected_value

    @protected_value.setter
    def protected_value(self, value):
        if value != int(value):
            raise TypeError("protected_value must be an integer")
        if 0 <= value <= 100:
            self._protected_value = int(value)
        else:
            raise ValueError("protected_value must be "
                             "between 0 and 100 inclusive")

    @protected_value.deleter
    def protected_value(self):
        raise AttributeError("do not delete, protected_value can be set to 0")
```

(Note that `__init__` refers to `self.protected_value` but the property methods refer to `self._protected_value`. This is so that `__init__` uses the property through the public API, ensuring it is "protected".)

And usage:

```
>>> p1 = Protective(3)
>>> p1.protected_value
3
>>> p1 = Protective(5.0)
>>> p1.protected_value
5
>>> p2 = Protective(-5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __init__
  File "<stdin>", line 15, in protected_value
ValueError: protected_value must be between 0 and 100 inclusive
>>> p1.protected_value = 7.3
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 17, in protected_value
TypeError: protected_value must be an integer
>>> p1.protected_value = 101
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 15, in protected_value
ValueError: protected_value must be between 0 and 100 inclusive
>>> del p1.protected_value
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 18, in protected_value
AttributeError: do not delete, protected_value can be set to 0
```

## Do the names matter?

Yes they do. `.setter` and `.deleter` make copies of the original property. This allows subclasses to properly modify behavior without altering the behavior in the parent.
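That subclassing point can be sketched like this (class and attribute names are illustrative): a child class copies the parent's property and replaces only the setter, leaving the inherited getter and the parent untouched.

```python
class Base:
    @property
    def attribute(self):
        return self._attribute

    @attribute.setter
    def attribute(self, value):
        self._attribute = value


class Shouty(Base):
    # Copy Base's property and swap in a new setter;
    # Base itself is unchanged.
    @Base.attribute.setter
    def attribute(self, value):
        self._attribute = value.upper()


s = Shouty()
s.attribute = "hello"
print(s.attribute)  # HELLO

b = Base()
b.attribute = "hello"
print(b.attribute)  # hello
```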

```python
class Obj:
    """property demo"""

    @property
    def get_only(self):
        return self._attribute

    @get_only.setter
    def get_or_set(self, value):
        self._attribute = value

    @get_or_set.deleter
    def get_set_or_delete(self):
        del self._attribute
```

Now for this to work, you have to use the respective names:

```python
obj = Obj()
# obj.get_only = "value"  # would error
obj.get_or_set = "value"
obj.get_set_or_delete = "new value"
the_value = obj.get_only
del obj.get_set_or_delete
# del obj.get_or_set      # would error
```

I"m not sure where this would be useful, but the use-case is if you want a get, set, and/or delete-only property. Probably best to stick to semantically same property having the same name.

## Conclusion

If you later need functionality around the setting, getting, and deleting, you can add it with the property decorator.

Avoid functions named `set_...` and `get_...` - that's what properties are for.

## What is the equivalent of a `void` return type annotation?

TL;DR: The idiomatic equivalent of a `void` return type annotation is `-> None`.

```python
def foo() -> None:
    ...
```

This matches the fact that a function without a `return` statement, or with only a bare `return`, evaluates to `None`.

```python
def void_func():  # unannotated void function
    pass

print(void_func())  # None
```

Omitting the return type does not mean that there is no return value. As per PEP 484:

> For a checked function, the default annotation for arguments and for the return type is `Any`.

This means the value is considered dynamically typed and statically supports any operation. That is practically the opposite meaning of `void`.

Type hinting in Python does not strictly require actual types. For example, annotations may use strings of type names: `Union[str, int]`, `Union[str, "int"]`, `"Union[str, int]"` and various variants are equivalent.

Similarly, the type annotation `None` is considered to mean "is of `NoneType`". This can be used not just for return types, though you will see it most often there:

```python
bar: None

def foo(baz: None) -> None:
    return None
```

This also applies to generic types. For example, you can use `None` in `Generator[int, None, None]` to indicate a generator does not take or return values.
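For instance (a minimal sketch; the function name is made up), a generator that only yields values can annotate both its send type and its return type as `None`:

```python
from typing import Generator

def count_up(n: int) -> Generator[int, None, None]:
    # yields ints, accepts nothing via send(), returns nothing
    for i in range(n):
        yield i

print(list(count_up(3)))  # [0, 1, 2]
```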

Even though PEP 484 suggests that `None` means `type(None)`, you should not use the latter form explicitly. The type hinting specification does not include any form of `type(...)`; that is technically a runtime expression, and its support is entirely up to the type checker. The `mypy` project has considered dropping support for `type(None)` and removing it from PEP 484 as well.

Or maybe we should update PEP 484 to not suggest that `type(None)` is valid as a type, making `None` the only correct spelling? There should be one -- and preferably only one -- obvious way to do it.

Returning to the sigmoid question: the logistic function is also available in scipy: http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html

```
In [1]: from scipy.stats import logistic

In [2]: logistic.cdf(0.458)
Out[2]: 0.61253961344091512
```

`logistic.cdf` is only a costly wrapper (because it allows you to scale and translate the logistic function) around another scipy function:

```
In [3]: from scipy.special import expit

In [4]: expit(0.458)
Out[4]: 0.61253961344091512
```

If you are concerned about performance, continue reading; otherwise just use `expit`.

## Some benchmarking:

```
In [5]: def sigmoid(x):
   ...:     return 1 / (1 + math.exp(-x))
   ...:

In [6]: %timeit -r 1 sigmoid(0.458)
1000000 loops, best of 1: 371 ns per loop

In [7]: %timeit -r 1 logistic.cdf(0.458)
10000 loops, best of 1: 72.2 µs per loop

In [8]: %timeit -r 1 expit(0.458)
100000 loops, best of 1: 2.98 µs per loop
```

As expected, `logistic.cdf` is (much) slower than `expit`. `expit` is still slower than the pure-Python `sigmoid` function when called with a single value, because it is a universal function written in C (http://docs.scipy.org/doc/numpy/reference/ufuncs.html) and thus has call overhead. That overhead outweighs the speedup of `expit`'s compiled implementation for a single value, but it becomes negligible on big arrays:

```
In [9]: import numpy as np

In [10]: x = np.random.random(1000000)

In [11]: def sigmoid_array(x):
    ...:     return 1 / (1 + np.exp(-x))
    ...:
```

(You"ll notice the tiny change from `math.exp` to `np.exp` (the first one does not support arrays, but is much faster if you have only one value to compute))

```
In [12]: %timeit -r 1 -n 100 sigmoid_array(x)
100 loops, best of 1: 34.3 ms per loop

In [13]: %timeit -r 1 -n 100 expit(x)
100 loops, best of 1: 31 ms per loop
```

But when you really need performance, a common practice is to keep a precomputed table of the sigmoid function in RAM, trading some precision and memory for speed (for example: http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/ ).
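A minimal sketch of that idea, in the style of word2vec's exp table (the constants `MAX_EXP` and `TABLE_SIZE` and the function name are made up for illustration):

```python
import numpy as np

MAX_EXP = 6        # clamp inputs to [-MAX_EXP, MAX_EXP]; sigmoid saturates beyond
TABLE_SIZE = 1000  # resolution of the lookup table

# Precompute sigmoid over the clamped range once, up front.
_grid = np.linspace(-MAX_EXP, MAX_EXP, TABLE_SIZE)
SIGMOID_TABLE = 1.0 / (1.0 + np.exp(-_grid))

def table_sigmoid(x):
    """Approximate sigmoid(x) by nearest-entry table lookup."""
    if x <= -MAX_EXP:
        return 0.0
    if x >= MAX_EXP:
        return 1.0
    idx = int((x + MAX_EXP) * (TABLE_SIZE - 1) / (2 * MAX_EXP))
    return SIGMOID_TABLE[idx]

print(table_sigmoid(0.458))  # close to 0.6125, within table resolution
```

With a 1000-entry table the error is bounded by the grid spacing; increase `TABLE_SIZE` to trade memory for precision.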

Also, note that the `expit` implementation is numerically stable since scipy version 0.14.0: https://github.com/scipy/scipy/issues/3385
