# Python | NumPy numpy.ndarray.__iadd__()


Using the `numpy.ndarray.__iadd__()` method, we can add a given value, passed as a parameter, to every element of a NumPy array in place.

**Syntax:** `ndarray.__iadd__(self, value, /)`

**Return:** `self += value`

**Example #1:** Each array element has the value passed to `ndarray.__iadd__()` added to it. Remember that this method works for any numeric value.

``````
# import the required module
import numpy as np

# make an array with NumPy
gfg = np.array([1.2, 2.6, 3, 4.5, 5])

# applying the ndarray.__iadd__() method
print(gfg.__iadd__(5))
``````

Output:

``````
[6.2 7.6 8. 9.5 10.]
``````

**Example #2:**

``````
# import the required module
import numpy as np

# make an array with NumPy
gfg = np.array([[1, 2.2, 3, 4, 5.01],
                [6.1, 5, 4.8, 3, 2]])

# applying the ndarray.__iadd__() method
print(gfg.__iadd__(3))
``````

Output:

``````
[[4.   5.2  6.   7.   8.01]
 [9.1  8.   7.8  6.   5.  ]]
``````
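
In everyday code you rarely call the dunder directly; the augmented-assignment operator `+=` invokes `ndarray.__iadd__()` for you. A minimal sketch of the same operation:

```python
import numpy as np

gfg = np.array([1.2, 2.6, 3, 4.5, 5])

# arr += 5 calls ndarray.__iadd__(5) under the hood and
# modifies the array in place instead of allocating a new one.
gfg += 5
print(gfg)  # the array now holds [6.2, 7.6, 8., 9.5, 10.]
```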

## Python | NumPy numpy.ndarray.__iadd__(): StackOverflow Questions

I tested most suggested solutions with perfplot (a pet project of mine, essentially a wrapper around `timeit`), and found

``````
import functools
import operator
functools.reduce(operator.iconcat, a, [])
``````

to be the fastest solution, both when many small lists and few long lists are concatenated. (`operator.iadd` is equally fast.)  Code to reproduce the plot:

``````
import functools
import itertools
import numpy
import operator
import perfplot


def forfor(a):
    return [item for sublist in a for item in sublist]


def sum_brackets(a):
    return sum(a, [])


def functools_reduce(a):
    return functools.reduce(operator.concat, a)


def functools_reduce_iconcat(a):
    return functools.reduce(operator.iconcat, a, [])


def itertools_chain(a):
    return list(itertools.chain.from_iterable(a))


def numpy_flat(a):
    return list(numpy.array(a).flat)


def numpy_concatenate(a):
    return list(numpy.concatenate(a))


perfplot.show(
    setup=lambda n: [list(range(10))] * n,
    # setup=lambda n: [list(range(n))] * 10,
    kernels=[
        forfor,
        sum_brackets,
        functools_reduce,
        functools_reduce_iconcat,
        itertools_chain,
        numpy_flat,
        numpy_concatenate,
    ],
    n_range=[2 ** k for k in range(16)],
    xlabel="num lists (of length 10)",
    # xlabel="len lists (10 lists total)"
)
``````

## What is the difference between the list methods append and extend?

• `append` adds its argument as a single element to the end of a list. The length of the list itself will increase by one.
• `extend` iterates over its argument adding each element to the list, extending the list. The length of the list will increase by however many elements were in the iterable argument.

## `append`

The `list.append` method appends an object to the end of the list.

``````
my_list.append(object)
``````

Whatever the object is, whether a number, a string, another list, or something else, it gets added onto the end of `my_list` as a single entry on the list.

``````
>>> my_list
['foo', 'bar']
>>> my_list.append('baz')
>>> my_list
['foo', 'bar', 'baz']
``````

So keep in mind that a list is an object. If you append another list onto a list, that other list will be a single object at the end of the list (which may not be what you want):

``````
>>> another_list = [1, 2, 3]
>>> my_list.append(another_list)
>>> my_list
['foo', 'bar', 'baz', [1, 2, 3]]
#                     ^^^^^^^^^--- single item at the end of the list.
``````

## `extend`

The `list.extend` method extends a list by appending elements from an iterable:

``````
my_list.extend(iterable)
``````

So with extend, each element of the iterable gets appended onto the list. For example:

``````
>>> my_list
['foo', 'bar']
>>> another_list = [1, 2, 3]
>>> my_list.extend(another_list)
>>> my_list
['foo', 'bar', 1, 2, 3]
``````

Keep in mind that a string is an iterable, so if you extend a list with a string, you'll append each character as you iterate over the string (which may not be what you want):

``````
>>> my_list.extend("baz")
>>> my_list
['foo', 'bar', 1, 2, 3, 'b', 'a', 'z']
``````

## Operator Overload, `__add__` (`+`) and `__iadd__` (`+=`)

Both `+` and `+=` operators are defined for `list`. They are semantically similar to extend.

`my_list + another_list` creates a third list in memory, so you can return the result of it, but it requires that the second operand also be a list.

`my_list += another_list` modifies the list in-place (it is the in-place operator, and lists are mutable objects, as we've seen) so it does not create a new list. It also works like extend, in that the second iterable can be any kind of iterable.

Don"t get confused - `my_list = my_list + another_list` is not equivalent to `+=` - it gives you a brand new list assigned to my_list.

## Time Complexity

Append has (amortized) constant time complexity, O(1).

Extend has time complexity O(k), where k is the length of the iterable argument.

Iterating through multiple calls to `append` adds to the complexity, making it equivalent to that of extend; and since extend's iteration is implemented in C, it will always be faster if you intend to append successive items from an iterable onto a list.

Regarding "amortized" - from the list object implementation source:

``````
    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     */
``````

This means that we get the benefit of a larger-than-needed memory allocation up front, but we may pay for it on the next marginal reallocation with an even larger one. Total time for n appends is linear at O(n), so the amortized time per append is O(1).
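
You can observe the over-allocation from Python with `sys.getsizeof` (the exact byte counts depend on the CPython version and platform, so none are shown here):

```python
import sys

lst = []
sizes = []
for i in range(20):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# The reported size stays flat between reallocations, then jumps:
# that plateau-and-jump pattern is the over-allocation at work.
print(sizes)
```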

## Performance

You may wonder what is more performant, since append can be used to achieve the same outcome as extend. The following functions do the same thing:

``````
def append(alist, iterable):
    for item in iterable:
        alist.append(item)

def extend(alist, iterable):
    alist.extend(iterable)
``````

So let"s time them:

``````
>>> import timeit
>>> min(timeit.repeat(lambda: append([], "abcdefghijklmnopqrstuvwxyz")))
2.867846965789795
>>> min(timeit.repeat(lambda: extend([], "abcdefghijklmnopqrstuvwxyz")))
0.8060121536254883
``````

### Addressing a comment on timings

A commenter said:

> Perfect answer, I just miss the timing of comparing adding only one element

Do the semantically correct thing. If you want to append all elements in an iterable, use `extend`. If you're just adding one element, use `append`.

Ok, so let"s create an experiment to see how this works out in time:

``````
import timeit

def append_one(a_list, element):
    a_list.append(element)

def extend_one(a_list, element):
    """creating a new list is semantically the most direct
    way to create an iterable to give to extend"""
    a_list.extend([element])
``````

And we see that going out of our way to create an iterable just to use extend is a (minor) waste of time:

``````
>>> min(timeit.repeat(lambda: append_one([], 0)))
0.2082819009956438
>>> min(timeit.repeat(lambda: extend_one([], 0)))
0.2397019260097295
``````

We learn from this that there's nothing gained from using `extend` when we have only one element to append.

Also, these timings are not that important. I am just showing them to make the point that, in Python, doing the semantically correct thing is doing things the Right Way™.

It"s conceivable that you might test timings on two comparable operations and get an ambiguous or inverse result. Just focus on doing the semantically correct thing.

## Conclusion

We see that `extend` is semantically clearer, and that it can run much faster than `append`, when you intend to append each element in an iterable to a list.

If you only have a single element (not in an iterable) to add to the list, use `append`.

## How do I concatenate two lists in Python?

As of Python 3.9, these are the most popular stdlib methods for concatenating two (or more) lists.

**Footnotes**

1. This is a slick solution because of its succinctness. But `sum` performs concatenation in a pairwise fashion, which means this is a quadratic operation as memory has to be allocated for each step. DO NOT USE if your lists are large.

2. See `chain` and `chain.from_iterable` from the docs. You will need to `import itertools` first. Concatenation is linear in memory, so this is the best in terms of performance and version compatibility. `chain.from_iterable` was introduced in 2.6.

3. This method uses Additional Unpacking Generalizations (PEP 448), but cannot generalize to N lists unless you manually unpack each one yourself.

4. `a += b` and `a.extend(b)` are more or less equivalent for all practical purposes. `+=` when called on a list will internally call `list.__iadd__`, which extends the first list by the second.
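
As a quick sketch of footnote 3's unpacking approach (the lists here are made up for illustration):

```python
a = [1, 2]
b = [3, 4]
c = [5]

# PEP 448 unpacking concatenates any fixed number of lists,
# but each one must be written out explicitly:
combined = [*a, *b, *c]
print(combined)  # [1, 2, 3, 4, 5]

# The starred items can be arbitrary iterables, not just lists:
mixed = [*a, *range(2)]
print(mixed)     # [1, 2, 0, 1]
```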

## Performance

**2-List Concatenation:**¹ There's not much difference between these methods, which makes sense given they all have the same order of complexity (linear). There's no particular reason to prefer one over the other except as a matter of style.

**N-List Concatenation:** Plots have been generated using the perfplot module. Code, for your reference.

1. The `iadd` (`+=`) and `extend` methods operate in-place, so a copy has to be generated each time before testing. To keep things fair, all methods have a pre-copy step for the left-hand list which can be ignored.

• DO NOT USE THE DUNDER METHOD `list.__add__` directly in any way, shape or form. In fact, steer clear of dunder methods, and use the operators and `operator` functions as they were designed. Python has careful semantics baked into these which are more complicated than just calling the dunder directly. So, to summarise: `a.__add__(b)` => BAD; `a + b` => GOOD.

• Some answers here offer `reduce(operator.add, [a, b])` for pairwise concatenation -- this is the same as `sum([a, b], [])` only more wordy.

• Any method that uses `set` will drop duplicates and lose ordering. Use with caution.

• `for i in b: a.append(i)` is more wordy and slower than `a.extend(b)`, which is a single function call and more idiomatic. `append` is slower because of the semantics with which memory is allocated and grown for lists.

• `heapq.merge` will work, but its use case is for merging sorted lists in linear time. Using it in any other situation is an anti-pattern.

• `yield`ing list elements from a function is an acceptable method, but `chain` does this faster and better (it has a code path in C, so it is fast).

• `operator.add(a, b)` is an acceptable functional equivalent to `a + b`. Its use cases are mainly for dynamic method dispatch. Otherwise, prefer `a + b`, which is shorter and more readable, in my opinion. YMMV.
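
To illustrate the `set` caveat above (example lists are made up; the result order is an implementation detail of set iteration):

```python
a = [3, 1, 2, 2]
b = [2, 4]

merged = list(set(a) | set(b))

# Duplicates are gone, and the original ordering is not preserved:
print(sorted(merged))  # [1, 2, 3, 4]
```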

The difference is that one modifies the data structure in place (`b += 1`), while the other just reassigns the variable (`a = a + 1`).

Just for completeness:

`x += y` is not always doing an in-place operation, there are (at least) three exceptions:

• If `x` doesn't implement an `__iadd__` method then the `x += y` statement is just a shorthand for `x = x + y`. This would be the case if `x` was something like an `int`.

• If `__iadd__` returns `NotImplemented`, Python falls back to `x = x + y`.

• The `__iadd__` method could theoretically be implemented to not work in place. It'd be really weird to do that, though.
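
A small sketch of the first exception, contrasted with a genuinely in-place `+=` on a list:

```python
# int defines no __iadd__, so += rebinds the name to a new object:
x = 10
x_id = id(x)
x += 1
print(id(x) == x_id)      # False -- x now names a different int object

# list defines __iadd__, so += mutates the existing object:
lst = [1, 2]
lst_id = id(lst)
lst += [3]
print(id(lst) == lst_id)  # True -- same list, modified in place
```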

As it happens, your `b`s are `numpy.ndarray`s, which implement `__iadd__` and return `self`, so your second loop modifies the original array in-place.

You can read more on this in the Python documentation of "Emulating Numeric Types".

> These [`__i*__`] methods are called to implement the augmented arithmetic assignments (`+=`, `-=`, `*=`, `@=`, `/=`, `//=`, `%=`, `**=`, `<<=`, `>>=`, `&=`, `^=`, `|=`). These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, the augmented assignment falls back to the normal methods. For instance, if x is an instance of a class with an `__iadd__()` method, `x += y` is equivalent to `x = x.__iadd__(y)`. Otherwise, `x.__add__(y)` and `y.__radd__(x)` are considered, as with the evaluation of `x + y`. In certain situations, augmented assignment can result in unexpected errors (see Why does `a_tuple[i] += ["item"]` raise an exception when the addition works?), but this behavior is in fact part of the data model.
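
The tuple surprise mentioned in that quote can be reproduced in a few lines:

```python
t = ([],)

# list.__iadd__ extends the inner list in place and returns it, but the
# follow-up assignment t[0] = result fails because tuples are immutable:
try:
    t[0] += ["item"]
except TypeError:
    pass

print(t)  # (['item'],) -- the mutation happened despite the exception
```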

This depends entirely on the object `i`.

`+=` calls the `__iadd__` method (if it exists, falling back on `__add__` if it doesn't exist) whereas `+` calls the `__add__` method¹ or the `__radd__` method in a few cases².

From an API perspective, `__iadd__` is supposed to be used for modifying mutable objects in place (returning the object which was mutated) whereas `__add__` should return a new instance of something. For immutable objects, both methods return a new instance, but `__iadd__` will put the new instance in the current namespace with the same name that the old instance had. This is why

``````
i = 1
i += 1
``````

seems to increment `i`. In reality, you get a new integer and assign it "on top of" `i`, losing one reference to the old integer. In this case, `i += 1` is exactly the same as `i = i + 1`. But, with most mutable objects, it's a different story:

As a concrete example:

``````
a = [1, 2, 3]
b = a
b += [1, 2, 3]
print(a)  # [1, 2, 3, 1, 2, 3]
print(b)  # [1, 2, 3, 1, 2, 3]
``````

compared to:

``````
a = [1, 2, 3]
b = a
b = b + [1, 2, 3]
print(a)  # [1, 2, 3]
print(b)  # [1, 2, 3, 1, 2, 3]
``````

notice how in the first example, since `b` and `a` reference the same object, when I use `+=` on `b`, it actually changes `b` (and `a` sees that change too; after all, it's referencing the same list). In the second case however, when I do `b = b + [1, 2, 3]`, this takes the list that `b` is referencing and concatenates it with a new list `[1, 2, 3]`. It then stores the concatenated list in the current namespace as `b`, with no regard for what `b` was the line before.

¹ In the expression `x + y`, if `x.__add__` isn't implemented, or if `x.__add__(y)` returns `NotImplemented` and `x` and `y` have different types, then `x + y` tries to call `y.__radd__(x)`. So, in the case where you have

`foo_instance += bar_instance`

if `Foo` doesn't implement `__add__` or `__iadd__` then the result here is the same as

`foo_instance = bar_instance.__radd__(foo_instance)`

² In the expression `foo_instance + bar_instance`, `bar_instance.__radd__` will be tried before `foo_instance.__add__` if the type of `bar_instance` is a subclass of the type of `foo_instance` (e.g. `issubclass(Bar, Foo)`). The rationale for this is that `Bar` is in some sense a "higher-level" object than `Foo`, so `Bar` should get the option of overriding `Foo`'s behavior.

From the documentation:

The `@` (at) operator is intended to be used for matrix multiplication. No builtin Python types implement this operator.

The `@` operator was introduced in Python 3.5. `@=` is matrix multiplication followed by assignment, as you would expect. They map to `__matmul__`, `__rmatmul__` or `__imatmul__` similar to how `+` and `+=` map to `__add__`, `__radd__` or `__iadd__`.

The operator and the rationale behind it are discussed in detail in PEP 465.
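
A hypothetical toy class (`Vec` is invented here for illustration) shows how a user-defined type can hook into `@` via `__matmul__`:

```python
class Vec:
    """Toy 2-D vector whose @ operator computes the dot product."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __matmul__(self, other):
        return self.x * other.x + self.y * other.y

print(Vec(1, 2) @ Vec(3, 4))  # 1*3 + 2*4 = 11
```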

In Python, `+=` is sugar coating for the `__iadd__` special method, or `__add__` or `__radd__` if `__iadd__` isn't present. The `__iadd__` method of a class can do anything it wants. The list object implements it and uses it to iterate over an iterable object, appending each element to itself in the same way that the list's `extend` method does.

Here"s a simple custom class that implements the `__iadd__` special method. You initialize the object with an int, then can use the += operator to add a number. I"ve added a print statement in `__iadd__` to show that it gets called. Also, `__iadd__` is expected to return an object, so I returned the addition of itself plus the other number which makes sense in this case.

``````
>>> class Adder(object):
...     def __init__(self, num=0):
...         self.num = num
...
...     def __iadd__(self, other):
...         print('in __iadd__', other)
...         self.num = self.num + other
...         return self.num
...
>>> a = Adder(2)
>>> a += 3
in __iadd__ 3
>>> a
5
``````

Hope this helps.

The general answer is that `+=` tries to call the `__iadd__` special method, and if that isn't available it tries to use `__add__` instead. So the issue is with the difference between these special methods.

The `__iadd__` special method is for an in-place addition, that is it mutates the object that it acts on. The `__add__` special method returns a new object and is also used for the standard `+` operator.

So when the `+=` operator is used on an object which has an `__iadd__` defined the object is modified in place. Otherwise it will instead try to use the plain `__add__` and return a new object.

That is why for mutable types like lists `+=` changes the object's value, whereas for immutable types like tuples, strings and integers a new object is returned instead (`a += b` becomes equivalent to `a = a + b`).

For types that support both `__iadd__` and `__add__` you therefore have to be careful which one you use. `a += b` will call `__iadd__` and mutate `a`, whereas `a = a + b` will create a new object and assign it to `a`. They are not the same operation!

``````
>>> a1 = a2 = [1, 2]
>>> b1 = b2 = [1, 2]
>>> a1 += [3]          # Uses __iadd__, modifies a1 in-place
>>> b1 = b1 + [3]      # Uses __add__, creates new list, assigns it to b1
>>> a2
[1, 2, 3]              # a1 and a2 are still the same list
>>> b2
[1, 2]                 # whereas only b1 was changed
``````

For immutable types (where you don't have an `__iadd__`) `a += b` and `a = a + b` are equivalent. This is what lets you use `+=` on immutable types, which might seem a strange design decision until you consider that otherwise you couldn't use `+=` on immutable types like numbers!