  Functors and their use in Python


Let's understand Functors first:

# Python code to illustrate the problem
# without functors

class GodClass(object):

    def DoSomething(self, x):
        x_first = x[0]
        if type(x_first) is int:
            return self.__MergeSort(x)
        if type(x_first) is float:
            return self.__HeapSort(x)
        else:
            return self.__QuickSort(x)

    def __MergeSort(self, a):
        # Dummy MergeSort
        print("Data is Merge sorted")
        return a

    def __HeapSort(self, b):
        # Dummy HeapSort
        print("Data is Heap sorted")
        return b

    def __QuickSort(self, c):
        # Dummy QuickSort
        print("Data is Quick sorted")
        return c

# This is where user code has to know the conditions for calling
# a particular strategy, making the code tightly coupled.

godObject = GodClass()
print(godObject.DoSomething([1, 2, 3]))

Output:

Data is Merge sorted [1, 2, 3]

There are some obvious gaps in this code:
1. The internal implementation should be hidden from user code, i.e. abstraction must be maintained.
2. Each class should handle a single responsibility / functionality.
3. The code is tightly coupled.

Let's solve the same problem using functors in Python

# Python code to illustrate the solution
# using functors

class Functor(object):

    def __init__(self, n=10):
        self.n = n

    # This construct allows an object to be called like a function in Python
    def __call__(self, x):
        x_first = x[0]
        if type(x_first) is int:
            return self.__MergeSort(x)
        if type(x_first) is float:
            return self.__HeapSort(x)
        else:
            return self.__QuickSort(x)

    def __MergeSort(self, a):
        # Dummy MergeSort
        print("Data is Merge sorted")
        return a

    def __HeapSort(self, b):
        # Dummy HeapSort
        print("Data is Heap sorted")
        return b

    def __QuickSort(self, c):
        # Dummy QuickSort
        print("Data is Quick sorted")
        return c

# Now let's code a class that calls the functor above.
# Without a functor, this class would need to know which particular
# function to call depending on the input type.

# USER CODE
class Caller(object):

    def __init__(self):
        self.sort = Functor()

    def Dosomething(self, x):
        # The caller simply invokes the functor and does not care
        # which sorting strategy is used; it only knows that the
        # result of this call is the sorted output.
        return self.sort(x)

Call = Caller()

# Different input data
print(Call.Dosomething([5, 4, 6]))             # Merge sort
print(Call.Dosomething([2.23, 3.45, 5.65]))    # Heap sort
print(Call.Dosomething(['a', 's', 'b', 'q']))  # Quick sort

Output:

Data is Merge sorted
[5, 4, 6]
Data is Heap sorted
[2.23, 3.45, 5.65]
Data is Quick sorted
['a', 's', 'b', 'q']

The above design makes it easy to change the underlying strategy or implementation without breaking any user code. User code can reliably use the functor without knowing what is going on under the hood, making the code decoupled, easily extensible, and maintainable.

Along with functors in Python, you now also understand the strategy pattern in Python, which requires separation between the class that calls a specific strategy and the class where the strategies are implemented or selected.
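The core mechanism behind both examples is that `obj(x)` is simply syntactic sugar for `obj.__call__(x)`. A minimal sketch of a callable object (the class name is illustrative, not from the article):

```python
class Adder(object):
    """A functor that remembers an offset and adds it when called."""

    def __init__(self, offset):
        self.offset = offset

    def __call__(self, x):
        # Instances of Adder can now be used wherever a function is expected
        return x + self.offset

add5 = Adder(5)
print(add5(10))        # 15 -- equivalent to add5.__call__(10)
print(callable(add5))  # True
```

Because the instance carries state (`self.offset`) in addition to behavior, a functor can be configured once and then passed around like an ordinary function.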

This article is courtesy of Ankit Singh.

Functors and their use in Python: StackOverflow Questions

You can easily get the outputs of any layer by using: model.layers[index].output

For all layers use this:

from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]    # evaluation functions

# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = [func([test, 1.]) for func in functors]
print(layer_outs)

Note: to simulate Dropout, pass learning_phase as 1. when computing layer_outs; otherwise use 0.

K.function creates theano/tensorflow tensor functions, which are later used to get the output from the symbolic graph given the input.

Now K.learning_phase() is required as an input, as many Keras layers like Dropout/BatchNormalization depend on it to change behavior between training and test time.

So if you remove the dropout layer in your code you can simply use:

from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functors = [K.function([inp], [out]) for out in outputs]    # evaluation functions

# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = [func([test]) for func in functors]
print(layer_outs)

Edit 2: More optimized

I just realized that the previous answer is not that optimized, as for each function evaluation the data will be transferred from CPU to GPU memory, and the tensor calculations for the lower layers will be done over and over.

Instead, this is a much better way, as you don't need multiple functions: a single function gives you the list of all outputs:

from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functor = K.function([inp, K.learning_phase()], outputs )   # evaluation function

# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = functor([test, 1.])
print(layer_outs)

Cases

• Common case: Almost always, you will want to use a list comprehension in python because it will be more obvious what you're doing to novice programmers reading your code. (This does not apply to other languages, where other idioms may apply.) It will even be more obvious what you're doing to python programmers, since list comprehensions are the de-facto standard in python for iteration; they are expected.
• Less-common case: However if you already have a function defined, it is often reasonable to use map, though it is considered "unpythonic". For example, map(sum, myLists) is more elegant/terse than [sum(x) for x in myLists]. You gain the elegance of not having to make up a dummy variable (e.g. sum(x) for x... or sum(_) for _... or sum(readableName) for readableName...) which you have to type twice, just to iterate. The same argument holds for filter and reduce and anything from the itertools module: if you already have a function handy, you could go ahead and do some functional programming. This gains readability in some situations, and loses it in others (e.g. novice programmers, multiple arguments)... but the readability of your code highly depends on your comments anyway.
• Almost never: You may want to use the map function as a pure abstract function while doing functional programming, where you're mapping map, or currying map, or otherwise benefit from talking about map as a function. In Haskell for example, a functor interface called fmap generalizes mapping over any data structure. This is very uncommon in python because the python grammar compels you to use generator-style to talk about iteration; you can't generalize it easily. (This is sometimes good and sometimes bad.) You can probably come up with rare python examples where map(f, *lists) is a reasonable thing to do. The closest example I can come up with would be sumEach = partial(map, sum), which is a one-liner that is very roughly equivalent to:

def sumEach(myLists):
    return [sum(_) for _ in myLists]

• Just using a for-loop: You can also of course just use a for-loop. While not as elegant from a functional-programming viewpoint, sometimes non-local variables make code clearer in imperative programming languages such as python, because people are very used to reading code that way. For-loops are also, generally, the most efficient when you are merely doing any complex operation that is not building a list like list-comprehensions and map are optimized for (e.g. summing, or making a tree, etc.) -- at least efficient in terms of memory (not necessarily in terms of time, where I'd expect at worst a constant factor, barring some rare pathological garbage-collection hiccuping).
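The less-common case above (mapping an existing function, and currying map itself) can be sketched as follows; the sample data is illustrative:

```python
from functools import partial

myLists = [[1, 2, 3], [4, 5], [6]]

# With a pre-made function, map avoids inventing a dummy loop variable
via_map = list(map(sum, myLists))
via_comp = [sum(x) for x in myLists]
assert via_map == via_comp == [6, 9, 6]

# Currying map, as in the sumEach example: partially apply map to sum
sumEach = partial(map, sum)
assert list(sumEach(myLists)) == [6, 9, 6]
```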

"Pythonism"

I dislike the word "pythonic" because I don't find that pythonic is always elegant in my eyes. Nevertheless, map and filter and similar functions (like the very useful itertools module) are probably considered unpythonic in terms of style.

Laziness

In terms of efficiency, like most functional programming constructs, MAP CAN BE LAZY, and in fact is lazy in python. That means you can do this (in python3) and your computer will not run out of memory and lose all your unsaved data:

>>> map(str, range(10**100))
<map object at 0x2201d50>

Try doing that with a list comprehension:

>>> [str(n) for n in range(10**100)]
# DO NOT TRY THIS AT HOME OR YOU WILL BE SAD #

Do note that list comprehensions are also inherently lazy, but python has chosen to implement them as non-lazy. Nevertheless, python does support lazy list comprehensions in the form of generator expressions, as follows:

>>> (str(n) for n in range(10**100))
<generator object <genexpr> at 0xacbdef>

You can basically think of the [...] syntax as passing in a generator expression to the list constructor, like list(x for x in range(5)).
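That equivalence can be checked directly (the results are equal lists, even though CPython compiles the two forms differently):

```python
# A bracketed comprehension and list() wrapped around a generator
# expression produce the same list
a = [x * x for x in range(5)]
b = list(x * x for x in range(5))
assert a == b == [0, 1, 4, 9, 16]
```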

Brief contrived example

from operator import neg
print({x:x**2 for x in map(neg,range(5))})

print({x:x**2 for x in [-y for y in range(5)]})

print({x:x**2 for x in (-y for y in range(5))})

List comprehensions are non-lazy, so may require more memory (unless you use generator comprehensions). The square brackets [...] often make things obvious, especially when in a mess of parentheses. On the other hand, sometimes you end up being verbose like typing [x for x in.... As long as you keep your iterator variables short, list comprehensions are usually clearer if you don't indent your code. But you could always indent your code.

print(
{x:x**2 for x in (-y for y in range(5))}
)

or break things up:

rangeNeg5 = (-y for y in range(5))
print(
{x:x**2 for x in rangeNeg5}
)

Efficiency comparison for python3

map is now lazy:

% python3 -mtimeit -s "xs=range(1000)" "f=lambda x:x" "z=map(f,xs)"
1000000 loops, best of 3: 0.336 usec per loop            ^^^^^^^^^

Therefore if you will not be using all your data, or do not know ahead of time how much data you need, map in python3 (and generator expressions in python2 or python3) will avoid calculating their values until the last moment necessary. Usually this will outweigh any overhead from using map. The downside is that this is very limited in python as opposed to most functional languages: you only get this benefit if you access your data left-to-right "in order", because python generator expressions can only be evaluated in the order x[0], x[1], x[2], ....
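The "last moment necessary" behavior can be seen with itertools.islice, which pulls only as many values as requested from the lazy map object:

```python
from itertools import islice

# map over an astronomically large range: nothing is computed yet
lazy = map(str, range(10**100))

# Only the first three values are ever produced
first_three = list(islice(lazy, 3))
print(first_three)  # ['0', '1', '2']
```

Replacing map with a list comprehension here would attempt to materialize all 10**100 values and fail; the generator-expression form (str(n) for n in range(10**100)) would work the same as map.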

However let's say that we have a pre-made function f we'd like to map, and we ignore the laziness of map by immediately forcing evaluation with list(...). We get some very interesting results:

% python3 -mtimeit -s "xs=range(1000)" "f=lambda x:x" "z=list(map(f,xs))"
10000 loops, best of 3: 165/124/135 usec per loop        ^^^^^^^^^^^^^^^
for list(<map object>)

% python3 -mtimeit -s "xs=range(1000)" "f=lambda x:x" "z=[f(x) for x in xs]"
10000 loops, best of 3: 181/118/123 usec per loop        ^^^^^^^^^^^^^^^^^^
for list(<generator>), probably optimized

% python3 -mtimeit -s "xs=range(1000)" "f=lambda x:x" "z=list(f(x) for x in xs)"
1000 loops, best of 3: 215/150/150 usec per loop         ^^^^^^^^^^^^^^^^^^^^^^
for list(<generator>)

The results are in the form AAA/BBB/CCC, where A was obtained on a circa-2010 Intel workstation with python 3.?.?, and B and C were obtained on a circa-2013 AMD workstation with python 3.2.1, i.e. on extremely different hardware. The result seems to be that map and list comprehensions are comparable in performance, which is most strongly affected by other random factors. The only thing we can tell seems to be that, oddly, while we expect list comprehensions [...] to perform better than generator expressions (...), map is ALSO more efficient than generator expressions (again assuming that all values are evaluated/used).

It is important to realize that these tests assume a very simple function (the identity function); however this is fine because if the function were complicated, then performance overhead would be negligible compared to other factors in the program. (It may still be interesting to test with other simple things like f=lambda x:x+x)
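A sketch of how such a comparison could be rerun with the standard timeit module, using the suggested f = lambda x: x + x; the absolute numbers are machine-dependent, so only the relative ordering is meaningful:

```python
import timeit

setup = "xs = range(1000); f = lambda x: x + x"

# Force evaluation in all three variants so the comparison is fair
t_map = timeit.timeit("list(map(f, xs))", setup=setup, number=200)
t_comp = timeit.timeit("[f(x) for x in xs]", setup=setup, number=200)
t_gen = timeit.timeit("list(f(x) for x in xs)", setup=setup, number=200)

print(t_map, t_comp, t_gen)
```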

If you're skilled at reading python assembly, you can use the dis module to see if that's actually what's going on behind the scenes:

>>> listComp = compile("[f(x) for x in xs]", "listComp", "eval")
>>> dis.dis(listComp)
1           0 LOAD_CONST               0 (<code object <listcomp> at 0x2511a48, file "listComp", line 1>)
3 MAKE_FUNCTION            0
9 GET_ITER
10 CALL_FUNCTION            1
13 RETURN_VALUE
>>> listComp.co_consts
(<code object <listcomp> at 0x2511a48, file "listComp", line 1>,)
>>> dis.dis(listComp.co_consts)
1           0 BUILD_LIST               0
>>    6 FOR_ITER                18 (to 27)
9 STORE_FAST               1 (x)
18 CALL_FUNCTION            1
21 LIST_APPEND              2
24 JUMP_ABSOLUTE            6
>>   27 RETURN_VALUE

>>> listComp2 = compile("list(f(x) for x in xs)", "listComp2", "eval")
>>> dis.dis(listComp2)
3 LOAD_CONST               0 (<code object <genexpr> at 0x255bc68, file "listComp2", line 1>)
6 MAKE_FUNCTION            0
12 GET_ITER
13 CALL_FUNCTION            1
16 CALL_FUNCTION            1
19 RETURN_VALUE
>>> listComp2.co_consts
(<code object <genexpr> at 0x255bc68, file "listComp2", line 1>,)
>>> dis.dis(listComp2.co_consts)
>>    3 FOR_ITER                17 (to 23)
6 STORE_FAST               1 (x)
15 CALL_FUNCTION            1
18 YIELD_VALUE
19 POP_TOP
20 JUMP_ABSOLUTE            3
26 RETURN_VALUE

>>> evalledMap = compile("list(map(f,xs))", "evalledMap", "eval")
>>> dis.dis(evalledMap)