# Finding mean, median, mode in Python without libraries


## Mean median mode in Python without libraries

Mean, median and mode are fundamental topics of statistics. You can easily calculate them in Python, with and without the use of external libraries.

These three are the main measures of central tendency. Central tendency tells us the "normal" or "average" values of a dataset. If you're just starting out with data science, this is the tutorial for you. Mean, median, and mode are often used in machine learning, so it is important to understand the concepts behind them.

This article shows you how to use Python to calculate mean, median, and mode without using external libraries.

### Mean in Python

Mean: The mean is the average of all numbers and is sometimes called the arithmetic mean. This code computes the mean or average of a list of numbers:

```# Python program to print
# mean of elements

# list of elements to calculate mean
n_num = [1, 2, 3, 4, 5]
n = len(n_num)

get_sum = sum(n_num)
mean = get_sum / n

print("Mean / Average is: " + str(mean))

```

### Output:

```Mean / Average is: 3.0
```

We define a list of numbers and calculate its length. We then use the sum() function to get the sum of all the elements in the list. Finally, we divide the total by the number of items and print the result to get the mean of the list.

### Median in Python

Median: The median is the middle number in a group of numbers, i.e. the middle value of a sorted dataset. It is used, again, to provide a "typical" value for a given population.

In programming, we can define the median as the value that separates a sequence into two parts - the lower half and the upper half. To calculate the median, we first need to sort the dataset. We could do this with sorting algorithms or using the built-in sorted() function. The second step is to determine whether the length of the dataset is odd or even. This code calculates the median of a list of numbers:

```# Python program to print
# median of elements

# list of elements to calculate median
n_num = [1, 2, 3, 4, 5]
n = len(n_num)
n_num.sort()

if n % 2 == 0:
    median1 = n_num[n//2]
    median2 = n_num[n//2 - 1]
    median = (median1 + median2) / 2
else:
    median = n_num[n//2]
print("Median is: " + str(median))

```

### Output:

```Median is: 3
```

We define a list of numbers and calculate its length. To find the median, we first sort the list in ascending order using the sort() function.

Next, we check whether the length of the list is even or odd by looking at its remainder when divided by 2. If it is even, we take the two middle elements of the list and print their average. If it is odd, we find the single middle element and print it.


### Mode in Python

Mode: The mode is the number that occurs most often within a set of numbers, i.e. the most common value in the dataset. You can think of it as the "most popular" value, the one that best represents the group.

An example of a mode would be daily sales at a tech store. The mode of this dataset will be the best selling product for a given day. This code calculates the mode of a list containing numbers:

```# Python program to print
# mode of elements
from collections import Counter

# list of elements to calculate mode
n_num = [1, 2, 3, 4, 5, 5]
n = len(n_num)

data = Counter(n_num)
get_mode = dict(data)
mode = [k for k, v in get_mode.items() if v == max(list(data.values()))]

if len(mode) == n:
    get_mode = "No mode found"
else:
    get_mode = "Mode is / are: " + ', '.join(map(str, mode))

print(get_mode)

```

### Output:

```Mode is / are: 5
```

We import Counter from the collections module, which is built into Python 2 and 3. It helps us count duplicate items in a list.

We define a list of numbers and calculate its length. We then call Counter (a dict subclass), which counts hashable objects, and convert it to a dict. Next we build a list, comparing each dict value (the count of an element) with the maximum of all dict values and keeping every element whose count equals that maximum. If the number of returned elements equals the total number of items in the list, every value occurs equally often, so we display "No mode found"; otherwise we display the returned mode(s).
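As a shorter alternative (a sketch, not part of the original code above), `Counter.most_common()` can report the most frequent value directly; note that it returns only one value even when several modes tie:

```
from collections import Counter

n_num = [1, 2, 3, 4, 5, 5]
# most_common(1) returns a list with a single (value, count) pair
print(Counter(n_num).most_common(1)[0][0])   # 5
```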

```# The list for which you need to find
# the Mode
y= [11, 8, 8, 3, 4, 4, 5, 6, 6, 6, 7, 8]

# First you sort it
# You will get numbers arranged from 3 to
# 11 in asc order
y.sort()

# Now open an empty list.
# What you are going to do is to count
# the occurrence of each number and append
L1=[]

# You can iterate through the sorted list
# of numbers in y,
# counting the occurrence of each number,
# using the following code

i = 0
while i < len(y):
    L1.append(y.count(y[i]))
    i += 1

# your L1 will be [1, 2, 2, 1, 3, 3, 3, 1, 3, 3, 3, 1],
# the occurrences for each number in sorted y

# now you can create a custom dictionary d1 for k : V
# where k = your values in sorted y
# and v = the occurrences of each value in y

# the Code is as follows

d1 = dict(zip(y, L1))

# your d1 will be {3: 1, 4: 2, 5: 1, 6: 3, 7: 1, 8: 3, 11: 1}
# now what you need to do is to filter
# the k values with the highest v values.
# do this with the following code

d2 = {k for (k, v) in d1.items() if v == max(L1)}

print("Mode(s) is/are :" + str(d2))

```

### Output:

```Mode(s) is/are :{8, 6}
```

## How to find mode in Python without inbuilt function

We have successfully calculated the mean, median, and mode of a dataset. You might be thinking, "Will I write these algorithms every time I need the mean, median, or mode?" You can, but of course you usually won't; the point was to show how the calculations work behind the scenes when a library computes them for you.

For any project, this can be achieved by simply importing the built-in statistics module in Python 3 and using its mean(), median(), and mode() functions. In addition, there are external libraries that can produce the same results in a single line of code, since the code is already written inside these libraries.
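For example, a minimal sketch using the standard library (note that statistics.mode() raises StatisticsError on a tie in Python versions before 3.8):

```
import statistics

data = [1, 2, 3, 4, 5, 5]
print(statistics.mean(data))    # 3.3333333333333335
print(statistics.median(data))  # 3.5
print(statistics.mode(data))    # 5
```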

## Finding the index of an item in a list

Given a list `["foo", "bar", "baz"]` and an item in the list `"bar"`, how do I get its index (`1`) in Python?

## Find current directory and file's directory

In Python, what commands can I use to find:

1. the current directory (where I was in the terminal when I ran the Python script), and
2. where the file I am executing is?

## How to find if directory exists in Python

In the `os` module in Python, is there a way to find if a directory exists, something like:

``````>>> os.direxists(os.path.join(os.getcwd(), "new_folder")) # in pseudocode
True/False
``````

## How do I find the location of my Python site-packages directory?

### Question by Daryl Spitzer

How do I find the location of my site-packages directory?

## Find all files in a directory with extension .txt in Python

How can I find all the files in a directory having the extension `.txt` in python?

## Find which version of package is installed with pip

Using pip, is it possible to figure out which version of a package is currently installed?

I know about `pip install XYZ --upgrade` but I am wondering if there is anything like `pip info XYZ`. If not, what would be the best way to tell which version I am currently using?

## error: Unable to find vcvarsall.bat

I tried to install the Python package dulwich:

``````pip install dulwich
``````

But I get a cryptic error message:

``````error: Unable to find vcvarsall.bat
``````

The same happens if I try installing the package manually:

``````> python setup.py install
running build_ext
building "dulwich._objects" extension
error: Unable to find vcvarsall.bat
``````

## How to use glob() to find files recursively?

This is what I have:

``````glob(os.path.join("src","*.c"))
``````

but I want to search the subfolders of src. Something like this would work:

``````glob(os.path.join("src","*.c"))
glob(os.path.join("src","*","*.c"))
glob(os.path.join("src","*","*","*.c"))
glob(os.path.join("src","*","*","*","*.c"))
``````

But this is obviously limited and clunky.

## Python: Find in list

I have come across this:

``````item = someSortOfSelection()
if item in myList:
doMySpecialFunction(item)
``````

but sometimes it does not work with all my items, as if they weren't recognized in the list (when it's a list of strings).

Is this the most "pythonic" way of finding an item in a list: `if x in l:`?

## How to find out the number of CPUs using python

I want to know the number of CPUs on the local machine using Python. The result should be `user/real` as output by `time(1)` when called with an optimally scaling userspace-only program.

## How to iterate over rows in a DataFrame in Pandas?

Iteration in Pandas is an anti-pattern and is something you should only do when you have exhausted every other option. You should not use any function with "`iter`" in its name for more than a few thousand rows or you will have to get used to a lot of waiting.

Do you want to print a DataFrame? Use `DataFrame.to_string()`.

Do you want to compute something? In that case, search for methods in this order (list modified from here):

1. Vectorization
2. Cython routines
3. List Comprehensions (vanilla `for` loop)
4. `DataFrame.apply()`: i) reductions that can be performed in Cython, ii) iteration in Python space
5. `DataFrame.itertuples()` and `iteritems()`
6. `DataFrame.iterrows()`

`iterrows` and `itertuples` (both receiving many votes in answers to this question) should be used in very rare circumstances, such as generating row objects/namedtuples for sequential processing, which is really the only thing these functions are useful for.
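As an illustration of that rare case (a sketch only; `send_to_legacy_api` is a hypothetical function standing in for whatever consumes one record at a time):

``````
# Sequential processing where a row object per record is genuinely needed
for row in df.itertuples(index=False):
    send_to_legacy_api(row.col1, row.col2)   # hypothetical per-record consumer
``````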

Appeal to Authority

The documentation page on iteration has a huge red warning box that says:

Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed [...].

* It"s actually a little more complicated than "don"t". `df.iterrows()` is the correct answer to this question, but "vectorize your ops" is the better one. I will concede that there are circumstances where iteration cannot be avoided (for example, some operations where the result depends on the value computed for the previous row). However, it takes some familiarity with the library to know when. If you"re not sure whether you need an iterative solution, you probably don"t. PS: To know more about my rationale for writing this answer, skip to the very bottom.

## Faster than Looping: Vectorization, Cython

A good number of basic operations and computations are "vectorised" by pandas (either through NumPy, or through Cythonized functions). This includes arithmetic, comparisons, (most) reductions, reshaping (such as pivoting), joins, and groupby operations. Look through the documentation on Essential Basic Functionality to find a suitable vectorised method for your problem.

If none exists, feel free to write your own using custom Cython extensions.

## Next Best Thing: List Comprehensions*

List comprehensions should be your next port of call if 1) there is no vectorized solution available, 2) performance is important, but not important enough to go through the hassle of cythonizing your code, and 3) you're trying to perform an elementwise transformation on your data. There is a good amount of evidence to suggest that list comprehensions are sufficiently fast (and even sometimes faster) for many common Pandas tasks.

The formula is simple,

``````# Iterating over one column - `f` is some function that processes your data
result = [f(x) for x in df["col"]]
# Iterating over two columns, use `zip`
result = [f(x, y) for x, y in zip(df["col1"], df["col2"])]
# Iterating over multiple columns - same data type
result = [f(row[0], ..., row[n]) for row in df[["col1", ...,"coln"]].to_numpy()]
# Iterating over multiple columns - differing data type
result = [f(row[0], ..., row[n]) for row in zip(df["col1"], ..., df["coln"])]
``````

If you can encapsulate your business logic into a function, you can use a list comprehension that calls it. You can make arbitrarily complex things work through the simplicity and speed of raw Python code.

Caveats

List comprehensions assume that your data is easy to work with - what that means is that your data types are consistent and you don't have NaNs - but this cannot always be guaranteed.

1. The first one is more obvious, but when dealing with NaNs, prefer in-built pandas methods if they exist (because they have much better corner-case handling logic), or ensure your business logic includes appropriate NaN handling logic.
2. When dealing with mixed data types you should iterate over `zip(df["A"], df["B"], ...)` instead of `df[["A", "B"]].to_numpy()` as the latter implicitly upcasts data to the most common type. As an example if A is numeric and B is string, `to_numpy()` will cast the entire array to string, which may not be what you want. Fortunately `zip`ping your columns together is the most straightforward workaround to this.

*Your mileage may vary for the reasons outlined in the Caveats section above.
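A small illustration of the second caveat above (mixed data types; a sketch, and the exact combined dtype depends on the columns involved):

``````
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"]})

# One common dtype for the whole array - column A loses its integer dtype
print(df[["A", "B"]].to_numpy().dtype)    # object

# zip keeps each column as-is
print(list(zip(df["A"], df["B"])))        # pairs keep A's integers alongside B's strings
``````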

## An Obvious Example

Let"s demonstrate the difference with a simple example of adding two pandas columns `A + B`. This is a vectorizable operaton, so it will be easy to contrast the performance of the methods discussed above.

Benchmarking code, for your reference. The line at the bottom measures a function written in numpandas, a style of Pandas that mixes heavily with NumPy to squeeze out maximum performance. Writing numpandas code should be avoided unless you know what you're doing. Stick to the API where you can (i.e., prefer `vec` over `vec_numpy`).
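The original benchmark code is not reproduced here; as a rough sketch of the kinds of functions being compared (the names `vec` and `vec_numpy` follow the text above, the rest are assumptions), timed with `%timeit` over DataFrames of increasing size, it might look like this:

``````
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": np.random.randn(10_000), "B": np.random.randn(10_000)})

def vec(df):
    return df["A"] + df["B"]                        # vectorised pandas addition

def vec_numpy(df):
    return df["A"].to_numpy() + df["B"].to_numpy()  # "numpandas": raw NumPy arrays

def list_comp(df):
    return [a + b for a, b in zip(df["A"], df["B"])]

def apply_rows(df):
    return df.apply(lambda row: row["A"] + row["B"], axis=1)

def iterrows(df):
    return [row["A"] + row["B"] for _, row in df.iterrows()]
``````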

I should mention, however, that it isn't always this cut and dry. Sometimes the answer to "what is the best method for an operation" is "it depends on your data". My advice is to test out different approaches on your data before settling on one.

* Pandas string methods are "vectorized" in the sense that they are specified on the series but operate on each element. The underlying mechanisms are still iterative, because string operations are inherently hard to vectorize.

## Why I Wrote this Answer

A common trend I notice from new users is to ask questions of the form "How can I iterate over my df to do X?", showing code that calls `iterrows()` inside a `for` loop. Here is why. A new user to the library who has not been introduced to the concept of vectorization will likely envision the code that solves their problem as iterating over their data to do something. Not knowing how to iterate over a DataFrame, the first thing they do is Google it and end up here, at this question. They then see the accepted answer telling them how to, and they close their eyes and run this code without ever first questioning whether iteration is the right thing to do.

The aim of this answer is to help new users understand that iteration is not necessarily the solution to every problem, and that better, faster and more idiomatic solutions could exist, and that it is worth investing time in exploring them. I'm not trying to start a war of iteration vs. vectorization, but I want new users to be informed when developing solutions to their problems with this library.

# In Python, what is the purpose of `__slots__` and what are the cases one should avoid this?

## TLDR:

The special attribute `__slots__` allows you to explicitly state which instance attributes you expect your object instances to have, with the expected results:

1. faster attribute access.
2. space savings in memory.

The space savings is from

1. Storing value references in slots instead of `__dict__`.
2. Denying `__dict__` and `__weakref__` creation if parent classes deny them and you declare `__slots__`.

### Quick Caveats

Small caveat, you should only declare a particular slot one time in an inheritance tree. For example:

``````class Base:
    __slots__ = "foo", "bar"

class Right(Base):
    __slots__ = "baz",

class Wrong(Base):
    __slots__ = "foo", "bar", "baz"        # redundant foo and bar
``````

Python doesn"t object when you get this wrong (it probably should), problems might not otherwise manifest, but your objects will take up more space than they otherwise should. Python 3.8:

``````>>> from sys import getsizeof
>>> getsizeof(Right()), getsizeof(Wrong())
(56, 72)
``````

This is because the Base"s slot descriptor has a slot separate from the Wrong"s. This shouldn"t usually come up, but it could:

``````>>> w = Wrong()
>>> w.foo = "foo"
>>> Base.foo.__get__(w)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: foo
>>> Wrong.foo.__get__(w)
"foo"
``````

The biggest caveat is for multiple inheritance - multiple "parent classes with nonempty slots" cannot be combined.

To accommodate this restriction, follow best practices: factor out all but one (or all) of your parents' abstractions into classes with empty `__slots__` (just like the abstract base classes in the standard library); each concrete parent then inherits from its own abstraction, and your new concrete class inherits from those abstractions collectively.

See section on multiple inheritance below for an example.

### Requirements:

• For attributes named in `__slots__` to actually be stored in slots instead of a `__dict__`, a class must inherit from `object` (automatic in Python 3, but must be explicit in Python 2).

• To prevent the creation of a `__dict__`, you must inherit from `object` and all classes in the inheritance must declare `__slots__` and none of them can have a `"__dict__"` entry.

There are a lot of details if you wish to keep reading.

## Why use `__slots__`: Faster attribute access.

The creator of Python, Guido van Rossum, states that he actually created `__slots__` for faster attribute access.

It is trivial to demonstrate the measurably faster access:

``````import timeit

class Foo(object): __slots__ = "foo",

class Bar(object): pass

slotted = Foo()
not_slotted = Bar()

def get_set_delete_fn(obj):
    def get_set_delete():
        obj.foo = "foo"
        obj.foo
        del obj.foo
    return get_set_delete
``````

and

``````>>> min(timeit.repeat(get_set_delete_fn(slotted)))
0.2846834529991611
>>> min(timeit.repeat(get_set_delete_fn(not_slotted)))
0.3664822799983085
``````

The slotted access is almost 30% faster in Python 3.5 on Ubuntu.

``````>>> 0.3664822799983085 / 0.2846834529991611
1.2873325658284342
``````

In Python 2 on Windows I have measured it about 15% faster.

## Why use `__slots__`: Memory Savings

Another purpose of `__slots__` is to reduce the space in memory that each object instance takes up.

The space saved over using `__dict__` can be significant.

SQLAlchemy attributes a lot of memory savings to `__slots__`.

To verify this, using the Anaconda distribution of Python 2.7 on Ubuntu Linux, with `guppy.hpy` (aka heapy) and `sys.getsizeof`, the size of a class instance without `__slots__` declared, and nothing else, is 64 bytes. That does not include the `__dict__`. Thank you Python for lazy evaluation again, the `__dict__` is apparently not called into existence until it is referenced, but classes without data are usually useless. When called into existence, the `__dict__` attribute is a minimum of 280 bytes additionally.

In contrast, a class instance with `__slots__` declared to be `()` (no data) is only 16 bytes, and 56 total bytes with one item in slots, 64 with two.
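To see roughly the same thing yourself (a sketch; the exact byte counts depend on the Python version and build):

``````
import sys

class WithDict:
    pass

class Slotted:
    __slots__ = ()

# Instance sizes only; the lazily created __dict__ of WithDict is not counted here.
print(sys.getsizeof(WithDict()), sys.getsizeof(Slotted()))
``````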

For 64 bit Python, I illustrate the memory consumption in bytes in Python 2.7 and 3.6, for `__slots__` and `__dict__` (no slots defined) for each point where the dict grows in 3.6 (except for 0, 1, and 2 attributes):

``````       Python 2.7             Python 3.6
attrs  __slots__  __dict__*   __slots__  __dict__* | *(no slots defined)
none   16         56 + 272†   16         56 + 112† | †if __dict__ referenced
one    48         56 + 272    48         56 + 112
two    56         56 + 272    56         56 + 112
six    88         56 + 1040   88         56 + 152
11     128        56 + 1040   128        56 + 240
22     216        56 + 3344   216        56 + 408
43     384        56 + 3344   384        56 + 752
``````

So, in spite of smaller dicts in Python 3, we see how nicely `__slots__` scale for instances to save us memory, and that is a major reason you would want to use `__slots__`.

Just for completeness of my notes, note that there is a one-time cost per slot in the class's namespace of 64 bytes in Python 2, and 72 bytes in Python 3, because slots use data descriptors like properties, called "members".

``````>>> Foo.foo
<member "foo" of "Foo" objects>
>>> type(Foo.foo)
<class "member_descriptor">
>>> getsizeof(Foo.foo)
72
``````

## Demonstration of `__slots__`:

To deny the creation of a `__dict__`, you must subclass `object`. Everything subclasses `object` in Python 3, but in Python 2 you had to be explicit:

``````class Base(object):
    __slots__ = ()
``````

now:

``````>>> b = Base()
>>> b.a = "a"
Traceback (most recent call last):
File "<pyshell#38>", line 1, in <module>
b.a = "a"
AttributeError: "Base" object has no attribute "a"
``````

Or subclass another class that defines `__slots__`

``````class Child(Base):
    __slots__ = ("a",)
``````

and now:

``````c = Child()
c.a = "a"
``````

but:

``````>>> c.b = "b"
Traceback (most recent call last):
File "<pyshell#42>", line 1, in <module>
c.b = "b"
AttributeError: "Child" object has no attribute "b"
``````

To allow `__dict__` creation while subclassing slotted objects, just add `"__dict__"` to the `__slots__` (note that slots are ordered, and you shouldn't repeat slots that are already in parent classes):

``````class SlottedWithDict(Child):
    __slots__ = ("__dict__", "b")

swd = SlottedWithDict()
swd.a = "a"
swd.b = "b"
swd.c = "c"
``````

and

``````>>> swd.__dict__
{"c": "c"}
``````

Or you don"t even need to declare `__slots__` in your subclass, and you will still use slots from the parents, but not restrict the creation of a `__dict__`:

``````class NoSlots(Child): pass
ns = NoSlots()
ns.a = "a"
ns.b = "b"
``````

And:

``````>>> ns.__dict__
{"b": "b"}
``````

However, `__slots__` may cause problems for multiple inheritance:

``````class BaseA(object):
    __slots__ = ("a",)

class BaseB(object):
    __slots__ = ("b",)
``````

Because creating a child class from parents with both non-empty slots fails:

``````>>> class Child(BaseA, BaseB): __slots__ = ()
Traceback (most recent call last):
File "<pyshell#68>", line 1, in <module>
class Child(BaseA, BaseB): __slots__ = ()
TypeError: Error when calling the metaclass bases
multiple bases have instance lay-out conflict
``````

If you run into this problem, you could just remove `__slots__` from the parents, or, if you have control of the parents, give them empty slots, or refactor to abstractions:

``````from abc import ABC

class AbstractA(ABC):
    __slots__ = ()

class BaseA(AbstractA):
    __slots__ = ("a",)

class AbstractB(ABC):
    __slots__ = ()

class BaseB(AbstractB):
    __slots__ = ("b",)

class Child(AbstractA, AbstractB):
    __slots__ = ("a", "b")

c = Child() # no problem!
``````

### Add `"__dict__"` to `__slots__` to get dynamic assignment:

``````class Foo(object):
    __slots__ = "bar", "baz", "__dict__"
``````

and now:

``````>>> foo = Foo()
>>> foo.boink = "boink"
``````

So with `"__dict__"` in slots we lose some of the size benefits with the upside of having dynamic assignment and still having slots for the names we do expect.

When you inherit from an object that isn't slotted, you get the same sort of semantics when you use `__slots__` - names that are in `__slots__` point to slotted values, while any other values are put in the instance's `__dict__`.

Avoiding `__slots__` because you want to be able to add attributes on the fly is actually not a good reason - just add `"__dict__"` to your `__slots__` if this is required.

You can similarly add `__weakref__` to `__slots__` explicitly if you need that feature.
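A minimal sketch of that (the class name is hypothetical; without `"__weakref__"` in the slots, `weakref.ref` on such an instance raises TypeError):

``````
import weakref

class WeakrefSlotted:
    __slots__ = "bar", "__weakref__"

w = WeakrefSlotted()
r = weakref.ref(w)   # works because __weakref__ is slotted
``````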

### Set to empty tuple when subclassing a namedtuple:

The namedtuple builtin makes immutable instances that are very lightweight (essentially, the size of tuples), but to get the benefits you need to declare `__slots__` yourself if you subclass them:

``````from collections import namedtuple
class MyNT(namedtuple("MyNT", "bar baz")):
    """MyNT is an immutable and lightweight object"""
    __slots__ = ()
``````

usage:

``````>>> nt = MyNT("bar", "baz")
>>> nt.bar
"bar"
>>> nt.baz
"baz"
``````

And trying to assign an unexpected attribute raises an `AttributeError` because we have prevented the creation of `__dict__`:

``````>>> nt.quux = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: "MyNT" object has no attribute "quux"
``````

You can allow `__dict__` creation by leaving off `__slots__ = ()`, but you can't use non-empty `__slots__` with subtypes of tuple.

## Biggest Caveat: Multiple inheritance

Even when non-empty slots are the same for multiple parents, they cannot be used together:

``````class Foo(object):
    __slots__ = "foo", "bar"

class Bar(object):
    __slots__ = "foo", "bar" # alas, would work if empty, i.e. ()

>>> class Baz(Foo, Bar): pass
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Error when calling the metaclass bases
multiple bases have instance lay-out conflict
``````

Using an empty `__slots__` in the parent seems to provide the most flexibility, allowing the child to choose to prevent or allow (by adding `"__dict__"` to get dynamic assignment, see section above) the creation of a `__dict__`:

``````class Foo(object): __slots__ = ()
class Bar(object): __slots__ = ()
class Baz(Foo, Bar): __slots__ = ("foo", "bar")
b = Baz()
b.foo, b.bar = "foo", "bar"
``````

You don"t have to have slots - so if you add them, and remove them later, it shouldn"t cause any problems.

Going out on a limb here: if you're composing mixins or using abstract base classes, which aren't intended to be instantiated, an empty `__slots__` in those parents seems to be the best way to go in terms of flexibility for subclassers.

To demonstrate, first, let"s create a class with code we"d like to use under multiple inheritance

``````class AbstractBase:
    __slots__ = ()
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __repr__(self):
        return f"{type(self).__name__}({repr(self.a)}, {repr(self.b)})"
``````

We could use the above directly by inheriting and declaring the expected slots:

``````class Foo(AbstractBase):
    __slots__ = "a", "b"
``````

But we don"t care about that, that"s trivial single inheritance, we need another class we might also inherit from, maybe with a noisy attribute:

``````class AbstractBaseC:
    __slots__ = ()
    @property
    def c(self):
        print("getting c!")
        return self._c
    @c.setter
    def c(self, arg):
        print("setting c!")
        self._c = arg
``````

Now if both bases had nonempty slots, we couldn't do the below. (In fact, if we wanted, we could have given `AbstractBase` nonempty slots a and b, and left them out of the below declaration - leaving them in would be wrong):

``````class Concretion(AbstractBase, AbstractBaseC):
    __slots__ = "a b _c".split()
``````

And now we have functionality from both via multiple inheritance, and can still deny `__dict__` and `__weakref__` instantiation:

``````>>> c = Concretion("a", "b")
>>> c.c = c
setting c!
>>> c.c
getting c!
Concretion("a", "b")
>>> c.d = "d"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: "Concretion" object has no attribute "d"
``````

## Other cases to avoid slots:

• Avoid them when you want to perform `__class__` assignment with another class that doesn't have them (and you can't add them) unless the slot layouts are identical. (I am very interested in learning who is doing this and why.)
• Avoid them if you want to subclass variable length builtins like long, tuple, or str, and you want to add attributes to them.
• Avoid them if you insist on providing default values via class attributes for instance variables.

You may be able to tease out further caveats from the rest of the `__slots__` documentation (the 3.7 dev docs are the most current), which I have made significant recent contributions to.

The current top answers cite outdated information and are quite hand-wavy and miss the mark in some important ways.

### Do not "only use `__slots__` when instantiating lots of objects"

I quote:

"You would want to use `__slots__` if you are going to instantiate a lot (hundreds, thousands) of objects of the same class."

Abstract Base Classes, for example, from the `collections` module, are not instantiated, yet `__slots__` are declared for them.

Why?

If a user wishes to deny `__dict__` or `__weakref__` creation, those things must not be available in the parent classes.

`__slots__` contributes to reusability when creating interfaces or mixins.

It is true that many Python users aren"t writing for reusability, but when you are, having the option to deny unnecessary space usage is valuable.

### `__slots__` doesn"t break pickling

When pickling a slotted object, you may find it complains with a misleading `TypeError`:

``````>>> pickle.loads(pickle.dumps(f))
TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled
``````

This is actually incorrect. This message comes from the oldest protocol, which is the default. You can select the latest protocol with the `-1` argument. In Python 2.7 this would be `2` (which was introduced in 2.3), and in 3.6 it is `4`.

``````>>> pickle.loads(pickle.dumps(f, -1))
<__main__.Foo object at 0x1129C770>
``````

in Python 2.7:

``````>>> pickle.loads(pickle.dumps(f, 2))
<__main__.Foo object at 0x1129C770>
``````

in Python 3.6

``````>>> pickle.loads(pickle.dumps(f, 4))
<__main__.Foo object at 0x1129C770>
``````

So I would keep this in mind, as it is a solved problem.

## Critique of the (until Oct 2, 2016) accepted answer

The first paragraph is half short explanation, half predictive. Here's the only part that actually answers the question:

The proper use of `__slots__` is to save space in objects. Instead of having a dynamic dict that allows adding attributes to objects at anytime, there is a static structure which does not allow additions after creation. This saves the overhead of one dict for every object that uses slots

The second half is wishful thinking, and off the mark:

While this is sometimes a useful optimization, it would be completely unnecessary if the Python interpreter was dynamic enough so that it would only require the dict when there actually were additions to the object.

Python actually does something similar to this, only creating the `__dict__` when it is accessed, but creating lots of objects with no data is fairly ridiculous.

The second paragraph oversimplifies and misses actual reasons to avoid `__slots__`. The below is not a real reason to avoid slots (for actual reasons, see the rest of my answer above.):

They change the behavior of the objects that have slots in a way that can be abused by control freaks and static typing weenies.

It then goes on to discuss other ways of accomplishing that perverse goal with Python, not discussing anything to do with `__slots__`.

The third paragraph is more wishful thinking. Together it is mostly off-the-mark content that the answerer didn't even author, and it contributes ammunition for critics of the site.

# Memory usage evidence

Create some normal objects and slotted objects:

``````>>> class Foo(object): pass
>>> class Bar(object): __slots__ = ()
``````

Instantiate a million of them:

``````>>> foos = [Foo() for f in xrange(1000000)]
>>> bars = [Bar() for b in xrange(1000000)]
``````

Inspect with `guppy.hpy().heap()`:

``````>>> guppy.hpy().heap()
Partition of a set of 2028259 objects. Total size = 99763360 bytes.
Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
0 1000000  49 64000000  64  64000000  64 __main__.Foo
1     169   0 16281480  16  80281480  80 list
2 1000000  49 16000000  16  96281480  97 __main__.Bar
3   12284   1   987472   1  97268952  97 str
...
``````

Access the regular objects and their `__dict__` and inspect again:

``````>>> for f in foos:
...     f.__dict__
>>> guppy.hpy().heap()
Partition of a set of 3028258 objects. Total size = 379763480 bytes.
Index  Count   %      Size    % Cumulative  % Kind (class / dict of class)
0 1000000  33 280000000  74 280000000  74 dict of __main__.Foo
1 1000000  33  64000000  17 344000000  91 __main__.Foo
2     169   0  16281480   4 360281480  95 list
3 1000000  33  16000000   4 376281480  99 __main__.Bar
4   12284   0    987472   0 377268952  99 str
...
``````

This is consistent with the history of Python, from Unifying types and classes in Python 2.2

If you subclass a built-in type, extra space is automatically added to the instances to accomodate `__dict__` and `__weakrefs__`. (The `__dict__` is not initialized until you use it though, so you shouldn"t worry about the space occupied by an empty dictionary for each instance you create.) If you don"t need this extra space, you can add the phrase "`__slots__ = []`" to your class.

# `os.listdir()` - list in the current directory

With listdir in os module you get the files and the folders in the current dir

`````` import os
arr = os.listdir()
print(arr)

>>> ["\$RECYCLE.BIN", "work.txt", "3ebooks.txt", "documents"]
``````

## Looking in a directory

``````arr = os.listdir("c:\files")
``````

# `glob` from glob

with glob you can specify a type of file to list like this

``````import glob

txtfiles = []
for file in glob.glob("*.txt"):
    txtfiles.append(file)
``````

## `glob` in a list comprehension

``````mylist = [f for f in glob.glob("*.txt")]
``````

## get the full path of only files in the current directory

``````import os
from os import listdir
from os.path import isfile, join

cwd = os.getcwd()
onlyfiles = [os.path.join(cwd, f) for f in os.listdir(cwd) if
os.path.isfile(os.path.join(cwd, f))]
print(onlyfiles)

["G:\getfilesname\getfilesname.py", "G:\getfilesname\example.txt"]
``````

## Getting the full path name with `os.path.abspath`

You get the full path in return

`````` import os
files_path = [os.path.abspath(x) for x in os.listdir()]
print(files_path)

["F:\documentiapplications.txt", "F:\documenticollections.txt"]
``````

## Walk: going through sub directories

os.walk returns the root, the list of directories and the list of files; that is why I unpack them into r, d, f in the for loop. It then looks for other files and directories in the subfolders of the root, and so on, until there are no more subfolders.

``````import os

# Getting the current work directory (cwd)
thisdir = os.getcwd()

# r=root, d=directories, f = files
for r, d, f in os.walk(thisdir):
    for file in f:
        if file.endswith(".docx"):
            print(os.path.join(r, file))
``````

### `os.listdir()`: get files in the current directory (Python 2)

In Python 2, if you want the list of the files in the current directory, you have to give the argument as "." or os.getcwd() in the os.listdir method.

`````` import os
arr = os.listdir(".")
print(arr)

>>> ["\$RECYCLE.BIN", "work.txt", "3ebooks.txt", "documents"]
``````

### To go up in the directory tree

``````# Method 1: the parent directory
x = os.listdir("..")

# Method 2: the filesystem root
x = os.listdir("/")
``````

### Get files: `os.listdir()` in a particular directory (Python 2 and 3)

`````` import os
arr = os.listdir("F:\python")
print(arr)

>>> ["\$RECYCLE.BIN", "work.txt", "3ebooks.txt", "documents"]
``````

### Get files of a particular subdirectory with `os.listdir()`

``````import os

x = os.listdir("./content")
``````

### `os.walk(".")` - current directory

`````` import os
arr = next(os.walk("."))[2]
print(arr)

>>> ["5bs_Turismo1.pdf", "5bs_Turismo1.pptx", "esperienza.txt"]
``````

### `next(os.walk("."))` and `os.path.join("dir", "file")`

`````` import os
arr = []
r, d, f = next(os.walk("F:\\_python"))   # root, dirs, files of the top directory only
for file in f:
    arr.append(os.path.join(r, file))

for f in arr:
    print(f)

>>> F:\_python\dict_class.py
>>> F:\_python\programmi.txt
``````

### `next(os.walk("F:\\_python"))` - get the full path - list comprehension

`````` r, d, f = next(os.walk("F:\\_python"))
[os.path.join(r, file) for file in f]

>>> ["F:\_python\dict_class.py", "F:\_python\programmi.txt"]
``````

### `os.walk` - get full path - all files in sub dirs

``````x = [os.path.join(r,file) for r,d,f in os.walk("F:\_python") for file in f]
print(x)

``````

### `os.listdir()` - get only txt files

`````` arr_txt = [x for x in os.listdir() if x.endswith(".txt")]
print(arr_txt)

>>> ["work.txt", "3ebooks.txt"]
``````

## Using `glob` to get the full path of the files

If I should need the absolute path of the files:

``````from path import path
from glob import glob
x = [path(f).abspath() for f in glob("F:\*.txt")]
for f in x:
    print(f)

>>> F:\acquistionline.txt
>>> F:\acquisti_2018.txt
>>> F:\bootstrap_jquery_ecc.txt
``````

## Using `os.path.isfile` to avoid directories in the list

``````import os.path
listOfFiles = [f for f in os.listdir() if os.path.isfile(f)]
print(listOfFiles)

>>> ["a simple game.py", "data.txt", "decorator.py"]
``````

## Using `pathlib` from Python 3.4

``````import pathlib

flist = []
for p in pathlib.Path(".").iterdir():
    if p.is_file():
        print(p)
        flist.append(p)

>>> error.PNG
>>> exemaker.bat
>>> guiprova.mp3
>>> setup.py
>>> speak_gui2.py
>>> thumb.PNG
``````

With `list comprehension`:

``````flist = [p for p in pathlib.Path(".").iterdir() if p.is_file()]
``````

Alternatively, use `pathlib.Path()` instead of `pathlib.Path(".")`

## Use glob method in pathlib.Path()

``````import pathlib

py = pathlib.Path().glob("*.py")
for file in py:
    print(file)

>>> stack_overflow_list.py
>>> stack_overflow_list_tkinter.py
``````

## Get all and only files with os.walk

``````import os
x = [i[2] for i in os.walk(".")]
y = []
for t in x:
    for f in t:
        y.append(f)
print(y)

>>> ["append_to_list.py", "data.txt", "data1.txt", "data2.txt", "data_180617", "os_walk.py", "READ2.py", "read_data.py", "somma_defaltdic.py", "substitute_words.py", "sum_data.py", "data.txt", "data1.txt", "data_180617"]
``````

## Get only files with next and walk in a directory

`````` import os
x = next(os.walk("F://python"))[2]
print(x)

>>> ["calculator.bat","calculator.py"]
``````

## Get only directories with next and walk in a directory

`````` import os
next(os.walk("F://python"))[1] # for the current dir use (".")

>>> ["python3","others"]
``````

## Get all the subdir names with `walk`

``````for r, d, f in os.walk("F:\_python"):
    for dirs in d:
        print(dirs)

>>> .vscode
>>> pyexcel
>>> pyschool.py
>>> subtitles
>>> _metaprogramming
>>> .ipynb_checkpoints
``````

## `os.scandir()` from Python 3.5 and greater

``````import os
x = [f.name for f in os.scandir() if f.is_file()]
print(x)

>>> ["calculator.bat","calculator.py"]

# Another example with scandir (a little variation from docs.python.org)
# This one is more efficient than os.listdir.
# In this case, it shows the files only in the current directory
# where the script is executed.

import os
with os.scandir() as i:
    for entry in i:
        if entry.is_file():
            print(entry.name)

>>> ebookmaker.py
>>> error.PNG
>>> exemaker.bat
>>> guiprova.mp3
>>> setup.py
>>> speakgui4.py
>>> speak_gui2.py
>>> speak_gui3.py
>>> thumb.PNG
``````

# Examples:

## Ex. 1: How many files are there in the subdirectories?

In this example, we count the number of files contained in a directory and all of its subdirectories.

``````import os

def count(dir, counter=0):
    "returns number of files in dir and subdirs"
    for pack in os.walk(dir):
        for f in pack[2]:
            counter += 1
    return dir + " : " + str(counter) + " files"

print(count("F:\python"))

>>> F:\python : 12057 files
``````

## Ex.2: How to copy all files from a directory to another?

A script to bring some order to your computer by finding all files of a given type (default: pptx) and copying them into a new folder.

``````import os
import shutil
from path import path

destination = "F:\file_copied"
# os.makedirs(destination)

def copyfile(dir, filetype="pptx", counter=0):
    "Searches for pptx (or other - pptx is the default) files and copies them"
    for pack in os.walk(dir):
        for f in pack[2]:
            if f.endswith(filetype):
                fullpath = pack[0] + "\\" + f
                print(fullpath)
                shutil.copy(fullpath, destination)
                counter += 1
    if counter > 0:
        print("-" * 30)
        print("\t==> Found in: `" + dir + "` : " + str(counter) + " files\n")

for dir in os.listdir():
    # searches for folders whose name starts with `_`
    if dir[0] == "_":
        # copyfile(dir, filetype="pdf")
        copyfile(dir, filetype="txt")

>>> _compiti18\Compito Contabilità 1\conti.txt
>>> _compiti18\Compito Contabilità 1\modula4.txt
>>> _compiti18\Compito Contabilità 1\moduloa4.txt
>>> ------------------------
>>> ==> Found in: `_compiti18` : 3 files
``````

## Ex. 3: How to get all the files in a txt file

In case you want to create a txt file with all the file names:

``````import os

mylist = ""
with open("filelist.txt", "w", encoding="utf-8") as file:
    for eachfile in os.listdir():
        mylist += eachfile + "\n"
    file.write(mylist)
``````

## Example: txt with all the files of a hard drive

``````"""
We are going to save a txt file with all the files in your directory.
We will use the function walk()
"""

import os

# see all the methods of os
# print(*dir(os), sep=", ")
listafile = []
percorso = []
with open("lista_file.txt", "w", encoding="utf-8") as testo:
    for root, dirs, files in os.walk("D:\\"):
        for file in files:
            listafile.append(file)
            percorso.append(root + "\\" + file)
            testo.write(file + "\n")
listafile.sort()
print("N. of files", len(listafile))
with open("lista_file_ordinata.txt", "w", encoding="utf-8") as testo_ordinato:
    for file in listafile:
        testo_ordinato.write(file + "\n")

with open("percorso.txt", "w", encoding="utf-8") as file_percorso:
    for file in percorso:
        file_percorso.write(file + "\n")

os.system("lista_file.txt")
os.system("lista_file_ordinata.txt")
os.system("percorso.txt")
``````

## All the files of C: in one text file

This is a shorter version of the previous code. Change the folder where you start looking for files if you need to begin from another position. On my computer this code generated a text file of about 50 MB, with a little less than 500,000 lines containing files with their complete path.

``````import os

with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
filewrite.write(f"{r + file}
")
``````

## How to write a file with all paths in a folder of a type

With this function you can create a txt file named after the type of file you are looking for (e.g. ttffile.txt) that contains the full path of every file of that type. It can be useful sometimes, I think.

``````import os

def searchfiles(extension=".ttf", folder="H:\\"):
    "Create a txt file with all the files of a type"
    with open(extension[1:] + "file.txt", "w", encoding="utf-8") as filewrite:
        for r, d, f in os.walk(folder):
            for file in f:
                if file.endswith(extension):
                    # note: os.path.join(r, file) would add the missing path separator
                    filewrite.write(f"{r + file}\n")

# looking for png files in the hard disk H:
searchfiles(".png", "H:\\")

>>> H:4bs_18Dolphins5.png
>>> H:4bs_18Dolphins6.png
>>> H:4bs_18Dolphins7.png
>>> H:5_18marketing htmlassetsimageslogo2.png
>>> H:7z001.png
>>> H:7z002.png
``````

## (New) Find all files and open them with tkinter GUI

I just want to add, in 2019, a little app that searches for all files in a directory and lets you open them by double-clicking the name of the file in the list.

``````import tkinter as tk
import os

def searchfiles(extension=".txt", folder="H:\\"):
    "insert all files in the listbox"
    for r, d, f in os.walk(folder):
        for file in f:
            if file.endswith(extension):
                lb.insert(0, r + "\\" + file)

def open_file():
    os.startfile(lb.get(lb.curselection()[0]))

root = tk.Tk()
root.geometry("400x400")
bt = tk.Button(root, text="Search", command=lambda: searchfiles(".png", "H:\\"))
bt.pack()
lb = tk.Listbox(root)
lb.pack(fill="both", expand=1)
lb.bind("<Double-Button>", lambda x: open_file())
root.mainloop()
``````

I just used the following, which was quite simple. First open a console, then cd to where you've downloaded your file (e.g. some-package.whl) and use

``````pip install some-package.whl
``````

Note: if pip.exe is not recognized, you may find it in the "Scripts" directory from where python has been installed. If pip is not installed, this page can help: How do I install pip on Windows?

Note: for clarification
If you copy the `*.whl` file to your local drive (e.g. C:\some-dir\some-file.whl), use the following command line parameters:

``````pip install C:/some-dir/some-file.whl
``````

The simplest way to get row counts per group is by calling `.size()`, which returns a `Series`:

``````df.groupby(["col1","col2"]).size()
``````

Usually you want this result as a `DataFrame` (instead of a `Series`) so you can do:

``````df.groupby(["col1", "col2"]).size().reset_index(name="counts")
``````

If you want to find out how to calculate the row counts and other statistics for each group continue reading below.

## Detailed example:

Consider the following example dataframe:

``````In [2]: df
Out[2]:
col1 col2  col3  col4  col5  col6
0    A    B  0.20 -0.61 -0.49  1.49
1    A    B -1.53 -1.01 -0.39  1.82
2    A    B -0.44  0.27  0.72  0.11
3    A    B  0.28 -1.32  0.38  0.18
4    C    D  0.12  0.59  0.81  0.66
5    C    D -0.13 -1.65 -1.64  0.50
6    C    D -1.42 -0.11 -0.18 -0.44
7    E    F -0.00  1.42 -0.26  1.17
8    E    F  0.91 -0.47  1.35 -0.34
9    G    H  1.48 -0.63 -1.14  0.17
``````

First let"s use `.size()` to get the row counts:

``````In [3]: df.groupby(["col1", "col2"]).size()
Out[3]:
col1  col2
A     B       4
C     D       3
E     F       2
G     H       1
dtype: int64
``````

Then let"s use `.size().reset_index(name="counts")` to get the row counts:

``````In [4]: df.groupby(["col1", "col2"]).size().reset_index(name="counts")
Out[4]:
col1 col2  counts
0    A    B       4
1    C    D       3
2    E    F       2
3    G    H       1
``````

### Including results for more statistics

When you want to calculate statistics on grouped data, it usually looks like this:

``````In [5]: (df
...: .groupby(["col1", "col2"])
...: .agg({
...:     "col3": ["mean", "count"],
...:     "col4": ["median", "min", "count"]
...: }))
Out[5]:
col4                  col3
median   min count      mean count
col1 col2
A    B    -0.810 -1.32     4 -0.372500     4
C    D    -0.110 -1.65     3 -0.476667     3
E    F     0.475 -0.47     2  0.455000     2
G    H    -0.630 -0.63     1  1.480000     1
``````

The result above is a little annoying to deal with because of the nested column labels, and also because row counts are on a per column basis.

To gain more control over the output I usually split the statistics into individual aggregations that I then combine using `join`. It looks like this:

``````In [6]: gb = df.groupby(["col1", "col2"])
...: counts = gb.size().to_frame(name="counts")
...: (counts
...:  .join(gb.agg({"col3": "mean"}).rename(columns={"col3": "col3_mean"}))
...:  .join(gb.agg({"col4": "median"}).rename(columns={"col4": "col4_median"}))
...:  .join(gb.agg({"col4": "min"}).rename(columns={"col4": "col4_min"}))
...:  .reset_index()
...: )
...:
Out[6]:
col1 col2  counts  col3_mean  col4_median  col4_min
0    A    B       4  -0.372500       -0.810     -1.32
1    C    D       3  -0.476667       -0.110     -1.65
2    E    F       2   0.455000        0.475     -0.47
3    G    H       1   1.480000       -0.630     -0.63
``````

### Footnotes

The code used to generate the test data is shown below:

``````In [1]: import numpy as np
...: import pandas as pd
...:
...: keys = np.array([
...:         ["A", "B"],
...:         ["A", "B"],
...:         ["A", "B"],
...:         ["A", "B"],
...:         ["C", "D"],
...:         ["C", "D"],
...:         ["C", "D"],
...:         ["E", "F"],
...:         ["E", "F"],
...:         ["G", "H"]
...:         ])
...:
...: df = pd.DataFrame(
...:     np.hstack([keys,np.random.randn(10,4).round(2)]),
...:     columns = ["col1", "col2", "col3", "col4", "col5", "col6"]
...: )
...:
...: df[["col3", "col4", "col5", "col6"]] =
...:     df[["col3", "col4", "col5", "col6"]].astype(float)
...:
``````

Disclaimer:

If some of the columns that you are aggregating have null values, then you really want to be looking at the group row counts as an independent aggregation for each column. Otherwise you may be misled as to how many records are actually being used to calculate things like the mean because pandas will drop `NaN` entries in the mean calculation without telling you about it.
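A small sketch of that check, using the example `df` above: compare the group sizes (all rows) with a per-column non-null count, so you can see how many values actually feed each statistic:

``````
gb = df.groupby(["col1", "col2"])
summary = (gb.size().to_frame("n_rows")
             .join(gb["col3"].count().rename("col3_non_null")))
``````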

# Using a for loop, how do I access the loop index, from 1 to 5 in this case?

Use `enumerate` to get the index with the element as you iterate:

``````for index, item in enumerate(items):
    print(index, item)
``````

And note that Python"s indexes start at zero, so you would get 0 to 4 with the above. If you want the count, 1 to 5, do this:

``````count = 0  # in case items is empty and you need it after the loop
for count, item in enumerate(items, start=1):
    print(count, item)
``````

# Unidiomatic control flow

What you are asking for is the Pythonic equivalent of the following, which is the algorithm most programmers of lower-level languages would use:

``````index = 0            # Python's indexing starts at zero
for item in items:   # Python's for loops are a "for each" loop
    print(index, item)
    index += 1
``````

Or in languages that do not have a for-each loop:

``````index = 0
while index < len(items):
    print(index, items[index])
    index += 1
``````

or sometimes more commonly (but unidiomatically) found in Python:

``````for index in range(len(items)):
    print(index, items[index])
``````

# Use the Enumerate Function

Python"s `enumerate` function reduces the visual clutter by hiding the accounting for the indexes, and encapsulating the iterable into another iterable (an `enumerate` object) that yields a two-item tuple of the index and the item that the original iterable would provide. That looks like this:

``````for index, item in enumerate(items, start=0):   # default is zero
    print(index, item)
``````

This code sample is fairly well the canonical example of the difference between code that is idiomatic of Python and code that is not. Idiomatic code is sophisticated (but not complicated) Python, written in the way that it was intended to be used. Idiomatic code is expected by the designers of the language, which means that usually this code is not just more readable, but also more efficient.

## Getting a count

Even if you don"t need indexes as you go, but you need a count of the iterations (sometimes desirable) you can start with `1` and the final number will be your count.

``````count = 0  # in case items is empty
for count, item in enumerate(items, start=1):   # default is zero
    print(item)

print("there were {0} items printed".format(count))
``````

The count seems to be more what you intend to ask for (as opposed to index) when you said you wanted from 1 to 5.

## Breaking it down - a step by step explanation

To break these examples down, say we have a list of items that we want to iterate over with an index:

``````items = ["a", "b", "c", "d", "e"]
``````

Now we pass this iterable to enumerate, creating an enumerate object:

``````enumerate_object = enumerate(items) # the enumerate object
``````

We can pull the first item out of this iterable that we would get in a loop with the `next` function:

``````iteration = next(enumerate_object) # first iteration from enumerate
print(iteration)
``````

And we see we get a tuple of `0`, the first index, and `"a"`, the first item:

``````(0, "a")
``````

we can use what is referred to as "sequence unpacking" to extract the elements from this two-tuple:

``````index, item = iteration
#   0,  "a" = (0, "a") # essentially this.
``````

and when we inspect `index`, we find it refers to the first index, 0, and `item` refers to the first item, `"a"`.

``````>>> print(index)
0
>>> print(item)
a
``````

# Conclusion

• Python indexes start at zero
• To get these indexes from an iterable as you iterate over it, use the enumerate function
• Using enumerate in the idiomatic way (along with tuple unpacking) creates code that is more readable and maintainable:

So do this:

``````for index, item in enumerate(items, start=0):   # Python indexes start at zero
    print(index, item)
``````

Getting some sort of modification date in a cross-platform way is easy - just call `os.path.getmtime(path)` and you'll get the Unix timestamp of when the file at `path` was last modified.
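For example (a minimal sketch; the path is hypothetical):

``````
import os
import datetime

mtime = os.path.getmtime("example.txt")        # float Unix timestamp
print(datetime.datetime.fromtimestamp(mtime))  # human-readable local time
``````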

Getting file creation dates, on the other hand, is fiddly and platform-dependent, differing even between the three big OSes.

Putting this all together, cross-platform code should look something like this...

``````import os
import platform

def creation_date(path_to_file):
    """
    Try to get the date that a file was created, falling back to when it was
    last modified if that isn't possible.
    See http://stackoverflow.com/a/39501288/1709587 for explanation.
    """
    if platform.system() == "Windows":
        return os.path.getctime(path_to_file)
    else:
        stat = os.stat(path_to_file)
        try:
            return stat.st_birthtime
        except AttributeError:
            # We're probably on Linux. No easy way to get creation dates here,
            # so we'll settle for when its content was last modified.
            return stat.st_mtime
``````

I noticed that every now and then I need to Google fopen all over again, just to build a mental image of what the primary differences between the modes are. So, I thought a diagram would be faster to read next time. Maybe someone else will find that helpful too.
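The diagram itself is not reproduced here; as a brief text summary of the standard modes (the same letters apply to C's fopen and Python's open):

``````
# "r"  - read;            file must exist;           position at start
# "r+" - read and write;  file must exist;           position at start
# "w"  - write;           create or truncate;        position at start
# "w+" - read and write;  create or truncate;        position at start
# "a"  - write;           create if missing;         writes always go to the end
# "a+" - read and write;  create if missing;         writes always go to the end
# add "b" to any of the above for binary mode (e.g. "rb", "w+b")
``````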

I would suggest using the duplicated method on the Pandas Index itself:

``````df3 = df3[~df3.index.duplicated(keep="first")]
``````

While all the other methods work, `.drop_duplicates` is by far the least performant for the provided example. Furthermore, while the groupby method is only slightly less performant, I find the duplicated method to be more readable.

Using the sample data provided:

``````>>> %timeit df3.reset_index().drop_duplicates(subset="index", keep="first").set_index("index")
1000 loops, best of 3: 1.54 ms per loop

>>> %timeit df3.groupby(df3.index).first()
1000 loops, best of 3: 580 µs per loop

>>> %timeit df3[~df3.index.duplicated(keep="first")]
1000 loops, best of 3: 307 µs per loop
``````

Note that you can keep the last element by changing the keep argument to `"last"`.

It should also be noted that this method works with `MultiIndex` as well (using df1 as specified in Paul's example):

``````>>> %timeit df1.groupby(level=df1.index.names).last()
1000 loops, best of 3: 771 µs per loop

>>> %timeit df1[~df1.index.duplicated(keep="last")]
1000 loops, best of 3: 365 µs per loop
``````

Here"s a concise solution which avoids regular expressions and slow in-Python loops:

``````def principal_period(s):
    i = (s + s).find(s, 1, -1)
    return None if i == -1 else s[:i]
``````

See the Community Wiki answer started by @davidism for benchmark results. In summary,

David Zhang"s solution is the clear winner, outperforming all others by at least 5x for the large example set.

This is based on the observation that a string is periodic if and only if it is equal to a nontrivial rotation of itself. Kudos to @AleksiTorhamo for realizing that we can then recover the principal period from the index of the first occurrence of `s` in `(s+s)[1:-1]`, and for informing me of the optional `start` and `end` arguments of Python's `string.find`.
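For instance (a quick usage sketch):

``````
print(principal_period("abababab"))   # 'ab'  - the repeating unit
print(principal_period("abcab"))      # None  - the string is not periodic
``````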

## Meaning of @classmethod and @staticmethod for beginner?

### Question by user1632861

Could someone explain to me the meaning of `@classmethod` and `@staticmethod` in python? I need to know the difference and the meaning.

As far as I understand, `@classmethod` tells a class that it's a method which should be inherited into subclasses, or... something. However, what's the point of that? Why not just define the class method without adding `@classmethod` or `@staticmethod` or any `@` definitions?

tl;dr: when should I use them, why should I use them, and how should I use them?

## What is the meaning of single and double underscore before an object name?

Can someone please explain the exact meaning of having single and double leading underscores before an object's name in Python, and the difference between both?

Also, does that meaning stay the same regardless of whether the object in question is a variable, a function, a method, etc.?

## What does -> mean in Python function definitions?

I"ve recently noticed something interesting when looking at Python 3.3 grammar specification:

``````funcdef: "def" NAME parameters ["->" test] ":" suite
``````

The optional "arrow" block was absent in Python 2 and I couldn"t find any information regarding its meaning in Python 3. It turns out this is correct Python and it"s accepted by the interpreter:

``````def f(x) -> 123:
    return x
``````

I thought that this might be some kind of a precondition syntax, but:

• I cannot test `x` here, as it is still undefined,
• No matter what I put after the arrow (e.g. `2 < 1`), it doesn't affect the function behavior.

Could anyone accustomed to this style of syntax explain it?

## What does the star and doublestar operator mean in a function call?

What does the `*` operator mean in Python, such as in code like `zip(*x)` or `f(**k)`?

1. How is it handled internally in the interpreter?
2. Does it affect performance at all? Is it fast or slow?
3. When is it useful and when is it not?
4. Should it be used in a function declaration or in a call?

## Get statistics for each group (such as count, mean, etc) using pandas GroupBy?

I have a data frame `df` and I use several columns from it to `groupby`:

``````df["col1","col2","col3","col4"].groupby(["col1","col2"]).mean()
``````

In the above way I almost get the table (data frame) that I need. What is missing is an additional column that contains the number of rows in each group. In other words, I have the mean but I also would like to know how many numbers were used to get these means. For example, in the first group there are 8 values and in the second one 10, and so on.

In short: How do I get group-wise statistics for a dataframe?

## What does -1 mean in numpy reshape?

A numpy matrix can be reshaped into a vector using the reshape function with parameter -1. But I don't know what -1 means here.

For example:

``````a = numpy.matrix([[1, 2, 3, 4], [5, 6, 7, 8]])
b = numpy.reshape(a, -1)
``````

The result of `b` is: `matrix([[1, 2, 3, 4, 5, 6, 7, 8]])`

Does anyone know what -1 means here? And it seems Python assigns -1 several meanings; for example, `array[-1]` means the last element. Can you give an explanation?

## In Matplotlib, what does the argument mean in fig.add_subplot(111)?

Sometimes I come across code such as this:

``````import matplotlib.pyplot as plt
x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]
fig = plt.figure()
fig.add_subplot(111)
plt.scatter(x, y)
plt.show()
``````

Which produces a simple scatter plot.

I"ve been reading the documentation like crazy but I can"t find an explanation for the `111`. sometimes I see a `212`.

What does the argument of `fig.add_subplot()` mean?

## What does it mean if a Python object is "subscriptable" or not?

### Question by Alistair

Which types of objects fall into the domain of "subscriptable"?

## What does "SyntaxError: Missing parentheses in call to "print"" mean in Python?

When I try to use a `print` statement in Python, it gives me this error:

``````>>> print "Hello, World!"
File "<stdin>", line 1
print "Hello, World!"
^
SyntaxError: Missing parentheses in call to "print"
``````

What does that mean?

## What does `ValueError: cannot reindex from a duplicate axis` mean?

I am getting a `ValueError: cannot reindex from a duplicate axis` when I am trying to set an index to a certain value. I tried to reproduce this with a simple example, but I could not do it.

Here is my session inside of `ipdb` trace. I have a DataFrame with string index, and integer columns, float values. However when I try to create `sum` index for sum of all columns I am getting `ValueError: cannot reindex from a duplicate axis` error. I created a small DataFrame with the same characteristics, but was not able to reproduce the problem, what could I be missing?

I don"t really understand what `ValueError: cannot reindex from a duplicate axis`means, what does this error message mean? Maybe this will help me diagnose the problem, and this is most answerable part of my question.

``````ipdb> type(affinity_matrix)
<class "pandas.core.frame.DataFrame">
ipdb> affinity_matrix.shape
(333, 10)
ipdb> affinity_matrix.columns
Int64Index([9315684, 9315597, 9316591, 9320520, 9321163, 9320615, 9321187, 9319487, 9319467, 9320484], dtype="int64")
ipdb> affinity_matrix.index
Index([u"001", u"002", u"003", u"004", u"005", u"008", u"009", u"010", u"011", u"014", u"015", u"016", u"018", u"020", u"021", u"022", u"024", u"025", u"026", u"027", u"028", u"029", u"030", u"032", u"033", u"034", u"035", u"036", u"039", u"040", u"041", u"042", u"043", u"044", u"045", u"047", u"047", u"048", u"050", u"053", u"054", u"055", u"056", u"057", u"058", u"059", u"060", u"061", u"062", u"063", u"065", u"067", u"068", u"069", u"070", u"071", u"072", u"073", u"074", u"075", u"076", u"077", u"078", u"080", u"082", u"083", u"084", u"085", u"086", u"089", u"090", u"091", u"092", u"093", u"094", u"095", u"096", u"097", u"098", u"100", u"101", u"103", u"104", u"105", u"106", u"107", u"108", u"109", u"110", u"111", u"112", u"113", u"114", u"115", u"116", u"117", u"118", u"119", u"121", u"122", ...], dtype="object")

ipdb> affinity_matrix.values.dtype
dtype("float64")
ipdb> "sums" in affinity_matrix.index
False
``````

Here is the error:

``````ipdb> affinity_matrix.loc["sums"] = affinity_matrix.sum(axis=0)
*** ValueError: cannot reindex from a duplicate axis
``````

I tried to reproduce this with a simple example, but I failed

``````In [32]: import pandas as pd

In [33]: import numpy as np

In [34]: a = np.arange(35).reshape(5,7)

In [35]: df = pd.DataFrame(a, ["x", "y", "u", "z", "w"], range(10, 17))

In [36]: df.values.dtype
Out[36]: dtype("int64")

In [37]: df.loc["sums"] = df.sum(axis=0)

In [38]: df
Out[38]:
10  11  12  13  14  15   16
x      0   1   2   3   4   5    6
y      7   8   9  10  11  12   13
u     14  15  16  17  18  19   20
z     21  22  23  24  25  26   27
w     28  29  30  31  32  33   34
sums  70  75  80  85  90  95  100
``````

# Recommendation for beginners:

This is my personal recommendation for beginners: start by learning `virtualenv` and `pip`, tools which work with both Python 2 and 3 and in a variety of situations, and pick up other tools once you start needing them.

# PyPI packages not in the standard library:

• `virtualenv` is a very popular tool that creates isolated Python environments for Python libraries. If you're not familiar with this tool, I highly recommend learning it, as it is very useful, and I'll be making comparisons to it for the rest of this answer.

It works by installing a bunch of files in a directory (eg: `env/`), and then modifying the `PATH` environment variable to prefix it with a custom `bin` directory (eg: `env/bin/`). An exact copy of the `python` or `python3` binary is placed in this directory, but Python is programmed to look for libraries relative to its path first, in the environment directory. It's not part of Python's standard library, but is officially blessed by the PyPA (Python Packaging Authority). Once activated, you can install packages in the virtual environment using `pip`.

• `pyenv` is used to isolate Python versions. For example, you may want to test your code against Python 2.7, 3.6, 3.7 and 3.8, so you'll need a way to switch between them. Once activated, it prefixes the `PATH` environment variable with `~/.pyenv/shims`, where there are special files matching the Python commands (`python`, `pip`). These are not copies of the Python-shipped commands; they are special scripts that decide on the fly which version of Python to run based on the `PYENV_VERSION` environment variable, or the `.python-version` file, or the `~/.pyenv/version` file. `pyenv` also makes the process of downloading and installing multiple Python versions easier, using the command `pyenv install`.

• `pyenv-virtualenv` is a plugin for `pyenv` by the same author as `pyenv`, to allow you to use `pyenv` and `virtualenv` at the same time conveniently. However, if you're using Python 3.3 or later, `pyenv-virtualenv` will try to run `python -m venv` if it is available, instead of `virtualenv`. You can use `virtualenv` and `pyenv` together without `pyenv-virtualenv`, if you don't want the convenience features.

• `virtualenvwrapper` is a set of extensions to `virtualenv` (see docs). It gives you commands like `mkvirtualenv`, `lssitepackages`, and especially `workon` for switching between different `virtualenv` directories. This tool is especially useful if you want multiple `virtualenv` directories.

• `pyenv-virtualenvwrapper` is a plugin for `pyenv` by the same author as `pyenv`, to conveniently integrate `virtualenvwrapper` into `pyenv`.

• `pipenv` aims to combine `Pipfile`, `pip` and `virtualenv` into one command on the command-line. The `virtualenv` directory typically gets placed in `~/.local/share/virtualenvs/XXX`, with `XXX` being a hash of the path of the project directory. This is different from `virtualenv`, where the directory is typically in the current working directory. `pipenv` is meant to be used when developing Python applications (as opposed to libraries). There are alternatives to `pipenv`, such as `poetry`, which I won't list here since this question is only about the packages that are similarly named.

# Standard library:

• `pyvenv` (not to be confused with `pyenv` in the previous section) is a script shipped with Python 3 but deprecated in Python 3.6 as it had problems (not to mention the confusing name). In Python 3.6+, the exact equivalent is `python3 -m venv`.

• `venv` is a package shipped with Python 3, which you can run using `python3 -m venv` (although for some reason some distros separate it out into a separate distro package, such as `python3-venv` on Ubuntu/Debian). It serves the same purpose as `virtualenv`, but only has a subset of its features (see a comparison here). `virtualenv` continues to be more popular than `venv`, especially since the former supports both Python 2 and 3.
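
As a small aside, the standard-library module can also be driven from Python itself; the sketch below is roughly equivalent to running `python3 -m venv env` (the directory name `env` is arbitrary):

``````
# A minimal sketch using the standard-library venv module.
import venv

# Create an isolated environment in ./env, with pip installed into it.
venv.create("env", with_pip=True)
``````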

You have four main options for converting types in pandas:

1. `to_numeric()` - provides functionality to safely convert non-numeric types (e.g. strings) to a suitable numeric type. (See also `to_datetime()` and `to_timedelta()`.)

2. `astype()` - convert (almost) any type to (almost) any other type (even if it's not necessarily sensible to do so). Also allows you to convert to categorical types (very useful).

3. `infer_objects()` - a utility method to convert object columns holding Python objects to a pandas type if possible.

4. `convert_dtypes()` - convert DataFrame columns to the "best possible" dtype that supports `pd.NA` (pandas' object to indicate a missing value).

Read on for more detailed explanations and usage of each of these methods.

# 1. `to_numeric()`

The best way to convert one or more columns of a DataFrame to numeric values is to use `pandas.to_numeric()`.

This function will try to change non-numeric objects (such as strings) into integers or floating point numbers as appropriate.

## Basic usage

The input to `to_numeric()` is a Series or a single column of a DataFrame.

``````>>> s = pd.Series(["8", 6, "7.5", 3, "0.9"]) # mixed string and numeric values
>>> s
0      8
1      6
2    7.5
3      3
4    0.9
dtype: object

>>> pd.to_numeric(s) # convert everything to float values
0    8.0
1    6.0
2    7.5
3    3.0
4    0.9
dtype: float64
``````

As you can see, a new Series is returned. Remember to assign this output to a variable or column name to continue using it:

``````# convert Series
my_series = pd.to_numeric(my_series)

# convert column "a" of a DataFrame
df["a"] = pd.to_numeric(df["a"])
``````

You can also use it to convert multiple columns of a DataFrame via the `apply()` method:

``````# convert all columns of DataFrame
df = df.apply(pd.to_numeric) # convert all columns of DataFrame

# convert just columns "a" and "b"
df[["a", "b"]] = df[["a", "b"]].apply(pd.to_numeric)
``````

As long as your values can all be converted, that's probably all you need.

## Error handling

But what if some values can't be converted to a numeric type?

`to_numeric()` also takes an `errors` keyword argument that allows you to force non-numeric values to be `NaN`, or simply ignore columns containing these values.

Here"s an example using a Series of strings `s` which has the object dtype:

``````>>> s = pd.Series(["1", "2", "4.7", "pandas", "10"])
>>> s
0         1
1         2
2       4.7
3    pandas
4        10
dtype: object
``````

The default behaviour is to raise if it can't convert a value. In this case, it can't cope with the string "pandas":

``````>>> pd.to_numeric(s) # or pd.to_numeric(s, errors="raise")
ValueError: Unable to parse string
``````

Rather than fail, we might want "pandas" to be considered a missing/bad numeric value. We can coerce invalid values to `NaN` as follows using the `errors` keyword argument:

``````>>> pd.to_numeric(s, errors="coerce")
0     1.0
1     2.0
2     4.7
3     NaN
4    10.0
dtype: float64
``````

The third option for `errors` is just to ignore the operation if an invalid value is encountered:

``````>>> pd.to_numeric(s, errors="ignore")
# the original Series is returned untouched
``````

This last option is particularly useful when you want to convert your entire DataFrame, but don't know which of your columns can be converted reliably to a numeric type. In that case, just write:

``````df.apply(pd.to_numeric, errors="ignore")
``````

The function will be applied to each column of the DataFrame. Columns that can be converted to a numeric type will be converted, while columns that cannot (e.g. they contain non-digit strings or dates) will be left alone.

## Downcasting

By default, conversion with `to_numeric()` will give you either an `int64` or a `float64` dtype (or whatever integer width is native to your platform).

That"s usually what you want, but what if you wanted to save some memory and use a more compact dtype, like `float32`, or `int8`?

`to_numeric()` gives you the option to downcast to either "integer", "signed", "unsigned", or "float". Here's an example for a simple series `s` of integer type:

``````>>> s = pd.Series([1, 2, -7])
>>> s
0    1
1    2
2   -7
dtype: int64
``````

Downcasting to "integer" uses the smallest possible integer that can hold the values:

``````>>> pd.to_numeric(s, downcast="integer")
0    1
1    2
2   -7
dtype: int8
``````

Downcasting to "float" similarly picks a smaller than normal floating type:

``````>>> pd.to_numeric(s, downcast="float")
0    1.0
1    2.0
2   -7.0
dtype: float32
``````

# 2. `astype()`

The `astype()` method enables you to be explicit about the dtype you want your DataFrame or Series to have. It's very versatile in that you can try to go from one type to any other.

## Basic usage

Just pick a type: you can use a NumPy dtype (e.g. `np.int16`), some Python types (e.g. bool), or pandas-specific types (like the categorical dtype).

Call the method on the object you want to convert and `astype()` will try and convert it for you:

``````# convert all DataFrame columns to the int64 dtype
df = df.astype(int)

# convert column "a" to int64 dtype and "b" to complex type
df = df.astype({"a": int, "b": complex})

# convert Series to float16 type
s = s.astype(np.float16)

# convert Series to Python strings
s = s.astype(str)

# convert Series to categorical type - see docs for more details
s = s.astype("category")
``````

Notice I said "try" - if `astype()` does not know how to convert a value in the Series or DataFrame, it will raise an error. For example if you have a `NaN` or `inf` value you"ll get an error trying to convert it to an integer.

As of pandas 0.20.0, this error can be suppressed by passing `errors="ignore"`. Your original object will be returned untouched.

## Be careful

`astype()` is powerful, but it will sometimes convert values "incorrectly". For example:

``````>>> s = pd.Series([1, 2, -7])
>>> s
0    1
1    2
2   -7
dtype: int64
``````

These are small integers, so how about converting to an unsigned 8-bit type to save memory?

``````>>> s.astype(np.uint8)
0      1
1      2
2    249
dtype: uint8
``````

The conversion worked, but the -7 wrapped around to become 249 (i.e. 2**8 - 7)!

Trying to downcast using `pd.to_numeric(s, downcast="unsigned")` instead could help prevent this mistake, since it leaves the dtype alone when the values cannot be represented in an unsigned type.
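
A quick sketch of that safer behaviour, continuing with the same `s`:

``````
>>> pd.to_numeric(s, downcast="unsigned")  # -7 cannot be represented unsigned, so no downcast
0    1
1    2
2   -7
dtype: int64
``````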

# 3. `infer_objects()`

Version 0.21.0 of pandas introduced the method `infer_objects()` for converting columns of a DataFrame that have an object datatype to a more specific type (soft conversions).

For example, here"s a DataFrame with two columns of object type. One holds actual integers and the other holds strings representing integers:

``````>>> df = pd.DataFrame({"a": [7, 1, 5], "b": ["3","2","1"]}, dtype="object")
>>> df.dtypes
a    object
b    object
dtype: object
``````

Using `infer_objects()`, you can change the type of column "a" to int64:

``````>>> df = df.infer_objects()
>>> df.dtypes
a     int64
b    object
dtype: object
``````

Column "b" has been left alone since its values were strings, not integers. If you wanted to try and force the conversion of both columns to an integer type, you could use `df.astype(int)` instead.

# 4. `convert_dtypes()`

Version 1.0 and above includes a method `convert_dtypes()` to convert Series and DataFrame columns to the best possible dtype that supports the `pd.NA` missing value.

Here "best possible" means the type most suited to hold the values. For example, this a pandas integer type if all of the values are integers (or missing values): an object column of Python integer objects is converted to `Int64`, a column of NumPy `int32` values will become the pandas dtype `Int32`.

With our `object` DataFrame `df`, we get the following result:

``````>>> df.convert_dtypes().dtypes
a     Int64
b    string
dtype: object
``````

Since column "a" held integer values, it was converted to the `Int64` type (which is capable of holding missing values, unlike `int64`).

Column "b" contained string objects, so was changed to pandas" `string` dtype.

By default, this method will infer the type from object values in each column. We can change this by passing `infer_objects=False`:

``````>>> df.convert_dtypes(infer_objects=False).dtypes
a    object
b    string
dtype: object
``````

Now column "a" remained an object column: pandas knows it can be described as an "integer" column (internally it ran `infer_dtype`) but didn"t infer exactly what dtype of integer it should have so did not convert it. Column "b" was again converted to "string" dtype as it was recognised as holding "string" values.

## How to iterate over rows in a DataFrame in Pandas?

Iteration in Pandas is an anti-pattern and is something you should only do when you have exhausted every other option. You should not use any function with "`iter`" in its name for more than a few thousand rows or you will have to get used to a lot of waiting.

Do you want to print a DataFrame? Use `DataFrame.to_string()`.

Do you want to compute something? In that case, search for methods in this order (list modified from here):

1. Vectorization
2. Cython routines
3. List Comprehensions (vanilla `for` loop)
4. `DataFrame.apply()`: i) Reductions that can be performed in Cython, ii) Iteration in Python space
5. `DataFrame.itertuples()` and `iteritems()`
6. `DataFrame.iterrows()`

`iterrows` and `itertuples` (both receiving many votes in answers to this question) should be used in very rare circumstances, such as generating row objects/namedtuples for sequential processing, which is really the only thing these functions are useful for.

Appeal to Authority

The documentation page on iteration has a huge red warning box that says:

Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed [...].

* It"s actually a little more complicated than "don"t". `df.iterrows()` is the correct answer to this question, but "vectorize your ops" is the better one. I will concede that there are circumstances where iteration cannot be avoided (for example, some operations where the result depends on the value computed for the previous row). However, it takes some familiarity with the library to know when. If you"re not sure whether you need an iterative solution, you probably don"t. PS: To know more about my rationale for writing this answer, skip to the very bottom.

## Faster than Looping: Vectorization, Cython

A good number of basic operations and computations are "vectorised" by pandas (either through NumPy, or through Cythonized functions). This includes arithmetic, comparisons, (most) reductions, reshaping (such as pivoting), joins, and groupby operations. Look through the documentation on Essential Basic Functionality to find a suitable vectorised method for your problem.

If none exists, feel free to write your own using custom Cython extensions.

## Next Best Thing: List Comprehensions*

List comprehensions should be your next port of call if 1) there is no vectorized solution available, 2) performance is important, but not important enough to go through the hassle of cythonizing your code, and 3) you're trying to perform an elementwise transformation on your data. There is a good amount of evidence to suggest that list comprehensions are sufficiently fast (and even sometimes faster) for many common Pandas tasks.

The formula is simple,

``````# Iterating over one column - `f` is some function that processes your data
result = [f(x) for x in df["col"]]
# Iterating over two columns, use `zip`
result = [f(x, y) for x, y in zip(df["col1"], df["col2"])]
# Iterating over multiple columns - same data type
result = [f(row[0], ..., row[n]) for row in df[["col1", ...,"coln"]].to_numpy()]
# Iterating over multiple columns - differing data type
result = [f(row[0], ..., row[n]) for row in zip(df["col1"], ..., df["coln"])]
``````

If you can encapsulate your business logic into a function, you can use a list comprehension that calls it. You can make arbitrarily complex things work through the simplicity and speed of raw Python code.

Caveats

List comprehensions assume that your data is easy to work with - what that means is your data types are consistent and you don't have NaNs, but this cannot always be guaranteed.

1. The first one is more obvious, but when dealing with NaNs, prefer in-built pandas methods if they exist (because they have much better corner-case handling logic), or ensure your business logic includes appropriate NaN handling logic.
2. When dealing with mixed data types you should iterate over `zip(df["A"], df["B"], ...)` instead of `df[["A", "B"]].to_numpy()`, as the latter implicitly upcasts data to the most common type. As an example, if A is numeric and B is string, `to_numpy()` will cast the entire array to a common dtype (usually `object`), which may not be what you want. Fortunately, `zip`ping your columns together is the most straightforward workaround to this (see the short sketch after this list).

*Your mileage may vary for the reasons outlined in the Caveats section above.
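
A minimal sketch of the mixed-dtype caveat (the columns "A" and "B" below are hypothetical):

``````
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": ["x", "y", "z"]})

# zip keeps each column's own type while iterating,
# so the integers stay integers and the strings stay strings.
result = [f"{a * 10}{b}" for a, b in zip(df["A"], df["B"])]
print(result)  # ['10x', '20y', '30z']
``````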

## An Obvious Example

Let"s demonstrate the difference with a simple example of adding two pandas columns `A + B`. This is a vectorizable operaton, so it will be easy to contrast the performance of the methods discussed above.

Benchmarking code, for your reference. The line at the bottom measures a function written in numpandas, a style of Pandas that mixes heavily with NumPy to squeeze out maximum performance. Writing numpandas code should be avoided unless you know what you're doing. Stick to the API where you can (i.e., prefer `vec` over `vec_numpy`).
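
Since the benchmark itself is not reproduced here, a minimal sketch of the two approaches being contrasted (the DataFrame below is hypothetical):

``````
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000, 2), columns=["A", "B"])

vectorised = df["A"] + df["B"]                                     # idiomatic and fastest
list_comp = pd.Series([a + b for a, b in zip(df["A"], df["B"])])   # still reasonably quick
``````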

I should mention, however, that it isn't always this cut and dried. Sometimes the answer to "what is the best method for an operation" is "it depends on your data". My advice is to test out different approaches on your data before settling on one.

* Pandas string methods are "vectorized" in the sense that they are specified on the series but operate on each element. The underlying mechanisms are still iterative, because string operations are inherently hard to vectorize.

## Why I Wrote this Answer

A common trend I notice from new users is to ask questions of the form "How can I iterate over my df to do X?", showing code that calls `iterrows()` inside a `for` loop. Here is why. A new user to the library who has not been introduced to the concept of vectorization will likely envision the code that solves their problem as iterating over their data to do something. Not knowing how to iterate over a DataFrame, the first thing they do is Google it and end up here, at this question. They then see the accepted answer telling them how to, and they close their eyes and run this code without ever first questioning whether iteration is the right thing to do.

The aim of this answer is to help new users understand that iteration is not necessarily the solution to every problem, and that better, faster and more idiomatic solutions could exist, and that it is worth investing time in exploring them. I'm not trying to start a war of iteration vs. vectorization, but I want new users to be informed when developing solutions to their problems with this library.

This is the behaviour to adopt when the referenced object is deleted. It is not specific to Django; this is an SQL standard, although Django has its own implementation on top of SQL. (1)

There are seven possible actions to take when such an event occurs:

• `CASCADE`: When the referenced object is deleted, also delete the objects that have references to it (when you remove a blog post for instance, you might want to delete comments as well). SQL equivalent: `CASCADE`.
• `PROTECT`: Forbid the deletion of the referenced object. To delete it you will have to delete all objects that reference it manually. SQL equivalent: `RESTRICT`.
• `RESTRICT`: (introduced in Django 3.1) Similar behavior as `PROTECT` that matches SQL"s `RESTRICT` more accurately. (See django documentation example)
• `SET_NULL`: Set the reference to NULL (requires the field to be nullable). For instance, when you delete a User, you might want to keep the comments they posted on blog posts, but say they were posted by an anonymous (or deleted) user. SQL equivalent: `SET NULL`.
• `SET_DEFAULT`: Set the default value. SQL equivalent: `SET DEFAULT`.
• `SET(...)`: Set a given value. This one is not part of the SQL standard and is entirely handled by Django.
• `DO_NOTHING`: Probably a very bad idea since this would create integrity issues in your database (referencing an object that actually doesn't exist). SQL equivalent: `NO ACTION`. (2)

Source: Django documentation

In most cases, `CASCADE` is the expected behaviour, but for every ForeignKey, you should always ask yourself what the expected behaviour is in this situation. `PROTECT` and `SET_NULL` are often useful. Setting `CASCADE` where it should not be can potentially delete your entire database in cascade, simply by deleting a single user.
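
A minimal sketch of how these choices look in practice (the `Article`/`Comment` models are hypothetical, not from the question):

``````
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=100)

class Comment(models.Model):
    # Deleting the Article also deletes its Comments.
    article = models.ForeignKey(Article, on_delete=models.CASCADE)
    # Deleting the author keeps the Comment but nulls out the reference.
    author = models.ForeignKey(
        "auth.User", null=True, blank=True, on_delete=models.SET_NULL
    )
``````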

It"s funny to notice that the direction of the `CASCADE` action is not clear to many people. Actually, it"s funny to notice that only the `CASCADE` action is not clear. I understand the cascade behavior might be confusing, however you must think that it is the same direction as any other action. Thus, if you feel that `CASCADE` direction is not clear to you, it actually means that `on_delete` behavior is not clear to you.

In your database, a foreign key is basically represented by an integer field whose value is the primary key of the foreign object. Let's say you have an entry comment_A, which has a foreign key to an entry article_B. If you delete the entry comment_A, everything is fine. article_B used to live without comment_A and doesn't mind if it's deleted. However, if you delete article_B, then comment_A panics! It never lived without article_B and needs it; it's part of its attributes (`article=article_B`, but what is article_B???). This is where `on_delete` steps in, to determine how to resolve this integrity error, either by saying:

• "No! Please! Don"t! I can"t live without you!" (which is said `PROTECT` or `RESTRICT` in Django/SQL)
• "All right, if I"m not yours, then I"m nobody"s" (which is said `SET_NULL`)
• "Good bye world, I can"t live without article_B" and commit suicide (this is the `CASCADE` behavior).
• "It"s OK, I"ve got spare lover, and I"ll reference article_C from now" (`SET_DEFAULT`, or even `SET(...)`).
• "I can"t face reality, and I"ll keep calling your name even if that"s the only thing left to me!" (`DO_NOTHING`)

I hope it makes cascade direction clearer. :)

Footnotes

(1) Django has its own implementation on top of SQL. And, as mentioned by @JoeMjr2 in the comments below, Django will not create the SQL constraints. If you want the constraints to be ensured by your database (for instance, if your database is used by another application, or if you work in the database console from time to time), you might want to set the related constraints manually yourself. There is an open ticket to add support for database-level on-delete constraints in Django.

(2) Actually, there is one case where `DO_NOTHING` can be useful: if you want to skip Django's implementation and implement the constraint yourself at the database level.

## Label vs. Location

The main distinction between the two methods is:

• `loc` gets rows (and/or columns) with particular labels.

• `iloc` gets rows (and/or columns) at integer locations.

To demonstrate, consider a series `s` of characters with a non-monotonic integer index:

``````>>> s = pd.Series(list("abcdef"), index=[49, 48, 47, 0, 1, 2])
49    a
48    b
47    c
0     d
1     e
2     f

>>> s.loc[0]    # value at index label 0
"d"

>>> s.iloc[0]   # value at index location 0
"a"

>>> s.loc[0:1]  # rows at index labels between 0 and 1 (inclusive)
0    d
1    e

>>> s.iloc[0:1] # rows at index location between 0 and 1 (exclusive)
49    a
``````

Here are some of the differences/similarities between `s.loc` and `s.iloc` when passed various objects:

| `<object>` | description | `s.loc[<object>]` | `s.iloc[<object>]` |
| --- | --- | --- | --- |
| `0` | single item | Value at index label `0` (the string `"d"`) | Value at index location 0 (the string `"a"`) |
| `0:1` | slice | Two rows (labels `0` and `1`) | One row (first row at location 0) |
| `1:47` | slice with out-of-bounds end | Zero rows (empty Series) | Five rows (location 1 onwards) |
| `1:47:-1` | slice with negative step | Three rows (labels `1` back to `47`) | Zero rows (empty Series) |
| `[2, 0]` | integer list | Two rows with given labels | Two rows with given locations |
| `s > "e"` | Bool series (indicating which values have the property) | One row (containing `"f"`) | `NotImplementedError` |
| `(s > "e").values` | Bool array | One row (containing `"f"`) | Same as `loc` |
| `999` | int object not in index | `KeyError` | `IndexError` (out of bounds) |
| `-1` | int object not in index | `KeyError` | Returns last value in `s` |
| `lambda x: x.index[3]` | callable applied to series (here returning 3rd item in index) | `s.loc[s.index[3]]` | `s.iloc[s.index[3]]` |

`loc`"s label-querying capabilities extend well-beyond integer indexes and it"s worth highlighting a couple of additional examples.

Here"s a Series where the index contains string objects:

``````>>> s2 = pd.Series(s.index, index=s.values)
>>> s2
a    49
b    48
c    47
d     0
e     1
f     2
``````

Since `loc` is label-based, it can fetch the first value in the Series using `s2.loc["a"]`. It can also slice with non-integer objects:

``````>>> s2.loc["c":"e"]  # all rows lying between "c" and "e" (inclusive)
c    47
d     0
e     1
``````

For DateTime indexes, we don't need to pass the exact date/time to fetch by label. For example:

``````>>> s3 = pd.Series(list("abcde"), pd.date_range("now", periods=5, freq="M"))
>>> s3
2021-01-31 16:41:31.879768    a
2021-02-28 16:41:31.879768    b
2021-03-31 16:41:31.879768    c
2021-04-30 16:41:31.879768    d
2021-05-31 16:41:31.879768    e
``````

Then to fetch the row(s) for March/April 2021 we only need:

``````>>> s3.loc["2021-03":"2021-04"]
2021-03-31 17:04:30.742316    c
2021-04-30 17:04:30.742316    d
``````

## Rows and Columns

`loc` and `iloc` work the same way with DataFrames as they do with Series. It's useful to note that both methods can address columns and rows together.

When given a tuple, the first element is used to index the rows and, if it exists, the second element is used to index the columns.

Consider the DataFrame defined below:

``````>>> import numpy as np
>>> df = pd.DataFrame(np.arange(25).reshape(5, 5),
index=list("abcde"),
columns=["x","y","z", 8, 9])
>>> df
x   y   z   8   9
a   0   1   2   3   4
b   5   6   7   8   9
c  10  11  12  13  14
d  15  16  17  18  19
e  20  21  22  23  24
``````

Then for example:

``````>>> df.loc["c": , :"z"]  # rows "c" and onwards AND columns up to "z"
x   y   z
c  10  11  12
d  15  16  17
e  20  21  22

>>> df.iloc[:, 3]        # all rows, but only the column at index location 3
a     3
b     8
c    13
d    18
e    23
``````

Sometimes we want to mix label and positional indexing methods for the rows and columns, somehow combining the capabilities of `loc` and `iloc`.

For example, consider the following DataFrame. How best to slice the rows up to and including "c" and take the first four columns?

``````>>> import numpy as np
>>> df = pd.DataFrame(np.arange(25).reshape(5, 5),
index=list("abcde"),
columns=["x","y","z", 8, 9])
>>> df
x   y   z   8   9
a   0   1   2   3   4
b   5   6   7   8   9
c  10  11  12  13  14
d  15  16  17  18  19
e  20  21  22  23  24
``````

We can achieve this result using `iloc` and the help of another method:

``````>>> df.iloc[:df.index.get_loc("c") + 1, :4]
x   y   z   8
a   0   1   2   3
b   5   6   7   8
c  10  11  12  13
``````

`get_loc()` is an index method meaning "get the position of the label in this index". Note that since slicing with `iloc` is exclusive of its endpoint, we must add 1 to this value if we want row "c" as well.

The simplest way to get row counts per group is by calling `.size()`, which returns a `Series`:

``````df.groupby(["col1","col2"]).size()
``````

Usually you want this result as a `DataFrame` (instead of a `Series`) so you can do:

``````df.groupby(["col1", "col2"]).size().reset_index(name="counts")
``````

If you want to find out how to calculate the row counts and other statistics for each group continue reading below.

## Detailed example:

Consider the following example dataframe:

``````In [2]: df
Out[2]:
col1 col2  col3  col4  col5  col6
0    A    B  0.20 -0.61 -0.49  1.49
1    A    B -1.53 -1.01 -0.39  1.82
2    A    B -0.44  0.27  0.72  0.11
3    A    B  0.28 -1.32  0.38  0.18
4    C    D  0.12  0.59  0.81  0.66
5    C    D -0.13 -1.65 -1.64  0.50
6    C    D -1.42 -0.11 -0.18 -0.44
7    E    F -0.00  1.42 -0.26  1.17
8    E    F  0.91 -0.47  1.35 -0.34
9    G    H  1.48 -0.63 -1.14  0.17
``````

First let"s use `.size()` to get the row counts:

``````In [3]: df.groupby(["col1", "col2"]).size()
Out[3]:
col1  col2
A     B       4
C     D       3
E     F       2
G     H       1
dtype: int64
``````

Then let"s use `.size().reset_index(name="counts")` to get the row counts:

``````In [4]: df.groupby(["col1", "col2"]).size().reset_index(name="counts")
Out[4]:
col1 col2  counts
0    A    B       4
1    C    D       3
2    E    F       2
3    G    H       1
``````

### Including results for more statistics

When you want to calculate statistics on grouped data, it usually looks like this:

``````In [5]: (df
...: .groupby(["col1", "col2"])
...: .agg({
...:     "col3": ["mean", "count"],
...:     "col4": ["median", "min", "count"]
...: }))
Out[5]:
col4                  col3
median   min count      mean count
col1 col2
A    B    -0.810 -1.32     4 -0.372500     4
C    D    -0.110 -1.65     3 -0.476667     3
E    F     0.475 -0.47     2  0.455000     2
G    H    -0.630 -0.63     1  1.480000     1
``````

The result above is a little annoying to deal with because of the nested column labels, and also because row counts are on a per column basis.

To gain more control over the output I usually split the statistics into individual aggregations that I then combine using `join`. It looks like this:

``````In [6]: gb = df.groupby(["col1", "col2"])
...: counts = gb.size().to_frame(name="counts")
...: (counts
...:  .join(gb.agg({"col3": "mean"}).rename(columns={"col3": "col3_mean"}))
...:  .join(gb.agg({"col4": "median"}).rename(columns={"col4": "col4_median"}))
...:  .join(gb.agg({"col4": "min"}).rename(columns={"col4": "col4_min"}))
...:  .reset_index()
...: )
...:
Out[6]:
col1 col2  counts  col3_mean  col4_median  col4_min
0    A    B       4  -0.372500       -0.810     -1.32
1    C    D       3  -0.476667       -0.110     -1.65
2    E    F       2   0.455000        0.475     -0.47
3    G    H       1   1.480000       -0.630     -0.63
``````

### Footnotes

The code used to generate the test data is shown below:

``````In [1]: import numpy as np
...: import pandas as pd
...:
...: keys = np.array([
...:         ["A", "B"],
...:         ["A", "B"],
...:         ["A", "B"],
...:         ["A", "B"],
...:         ["C", "D"],
...:         ["C", "D"],
...:         ["C", "D"],
...:         ["E", "F"],
...:         ["E", "F"],
...:         ["G", "H"]
...:         ])
...:
...: df = pd.DataFrame(
...:     np.hstack([keys,np.random.randn(10,4).round(2)]),
...:     columns = ["col1", "col2", "col3", "col4", "col5", "col6"]
...: )
...:
...: df[["col3", "col4", "col5", "col6"]] =
...:     df[["col3", "col4", "col5", "col6"]].astype(float)
...:
``````

Disclaimer:

If some of the columns that you are aggregating have null values, then you really want to be looking at the group row counts as an independent aggregation for each column. Otherwise you may be misled as to how many records are actually being used to calculate things like the mean because pandas will drop `NaN` entries in the mean calculation without telling you about it.

The idiomatic way to do this with Pandas is to use the `.sample` method of your dataframe to sample all rows without replacement:

``````df.sample(frac=1)
``````

The `frac` keyword argument specifies the fraction of rows to return in the random sample, so `frac=1` means return all rows (in random order).

Note: If you wish to shuffle your dataframe in-place and reset the index, you could do e.g.

``````df = df.sample(frac=1).reset_index(drop=True)
``````

Here, specifying `drop=True` prevents `.reset_index` from creating a column containing the old index entries.

Follow-up note: Although it may not look like the above operation is in-place, python/pandas is smart enough not to do another malloc for the shuffled object. That is, even though the reference object has changed (by which I mean `id(df_old)` is not the same as `id(df_new)`), the underlying C object is still the same. To show that this is indeed the case, you could run a simple memory profiler:

``````$ python3 -m memory_profiler .\test.py
Filename: .\test.py

Line #    Mem usage    Increment   Line Contents
================================================
5     68.5 MiB     68.5 MiB   @profile
6                             def shuffle():
7    847.8 MiB    779.3 MiB       df = pd.DataFrame(np.random.randn(100, 1000000))
8    847.9 MiB      0.1 MiB       df = df.sample(frac=1).reset_index(drop=True)

``````

## Placing the legend (`bbox_to_anchor`)

A legend is positioned inside the bounding box of the axes using the `loc` argument to `plt.legend`.
E.g. `loc="upper right"` places the legend in the upper right corner of the bounding box, which by default extents from `(0,0)` to `(1,1)` in axes coordinates (or in bounding box notation `(x0,y0, width, height)=(0,0,1,1)`).

To place the legend outside of the axes bounding box, one may specify a tuple `(x0,y0)` of axes coordinates of the lower left corner of the legend.

``````plt.legend(loc=(1.04,0))
``````

A more versatile approach is to manually specify the bounding box into which the legend should be placed, using the `bbox_to_anchor` argument. One can restrict oneself to supply only the `(x0, y0)` part of the bbox. This creates a zero span box, out of which the legend will expand in the direction given by the `loc` argument. E.g.

`plt.legend(bbox_to_anchor=(1.04,1), loc="upper left")`

places the legend outside the axes, such that the upper left corner of the legend is at position `(1.04,1)` in axes coordinates.

Further examples are given below, which additionally show the interplay between arguments like `mode` and `ncol`.

``````l1 = plt.legend(bbox_to_anchor=(1.04, 1), borderaxespad=0)
l2 = plt.legend(bbox_to_anchor=(1.04, 0), loc="lower left", borderaxespad=0)
l3 = plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left", borderaxespad=0)
l4 = plt.legend(bbox_to_anchor=(0, 1.02, 1, 0.2), loc="lower left",
                mode="expand", borderaxespad=0, ncol=3)
l5 = plt.legend(bbox_to_anchor=(1, 0), loc="lower right",
                bbox_transform=fig.transFigure, ncol=3)
l6 = plt.legend(bbox_to_anchor=(0.4, 0.8), loc="upper right")
``````

Details about how to interpret the 4-tuple argument to `bbox_to_anchor`, as in `l4`, can be found in this question. The `mode="expand"` expands the legend horizontally inside the bounding box given by the 4-tuple. For a vertically expanded legend, see this question.

Sometimes it may be useful to specify the bounding box in figure coordinates instead of axes coordinates. This is shown in the example `l5` from above, where the `bbox_transform` argument is used to put the legend in the lower left corner of the figure.

### Postprocessing

Having placed the legend outside the axes often leads to the undesired situation that it is completely or partially outside the figure canvas.

Solutions to this problem are:

• Adjust the subplot parameters
One can adjust the subplot parameters such that the axes take up less space inside the figure (and thereby leave more space for the legend) by using `plt.subplots_adjust`. E.g.

``````  plt.subplots_adjust(right=0.7)
``````

leaves 30% space on the right-hand side of the figure, where one could place the legend.

• Tight layout
Using `plt.tight_layout` allows you to automatically adjust the subplot parameters so that the elements in the figure sit tight against the figure edges. Unfortunately, the legend is not taken into account by this automatism, but we can supply a rectangle box that the whole subplots area (including labels) will fit into.

``````  plt.tight_layout(rect=[0,0,0.75,1])
``````
• Saving the figure with `bbox_inches = "tight"`
The argument `bbox_inches = "tight"` to `plt.savefig` can be used to save the figure such that all artists on the canvas (including the legend) fit into the saved area. If needed, the figure size is automatically adjusted.

``````  plt.savefig("output.png", bbox_inches="tight")
``````
• Automatically adjusting the subplot parameters
A way to automatically adjust the subplot position such that the legend fits inside the canvas without changing the figure size can be found in this answer: Creating figure with exact size and no padding (and legend outside the axes)

Comparison between the cases discussed above:

## Alternatives

A figure legend

One may add a legend to the figure instead of the axes, using `matplotlib.figure.Figure.legend`. This has become especially useful for matplotlib versions >= 2.1, where no special arguments are needed

``````fig.legend(loc=7)
``````

to create a legend for all artists in the different axes of the figure. The legend is placed using the `loc` argument, similar to how it is placed inside an axes, but in reference to the whole figure - hence it will be outside the axes somewhat automatically. What remains is to adjust the subplots such that there is no overlap between the legend and the axes. Here the point "Adjust the subplot parameters" from above will be helpful. An example:

``````import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2*np.pi)
colors = ["#7aa0c4", "#ca82e1", "#8bcd50", "#e18882"]
fig, axes = plt.subplots(ncols=2)
for i in range(4):
    axes[i//2].plot(x, np.sin(x+i), color=colors[i], label="y=sin(x+{})".format(i))

fig.legend(loc=7)
fig.tight_layout()
plt.show()
``````

Legend inside dedicated subplot axes

An alternative to using `bbox_to_anchor` would be to place the legend in its dedicated subplot axes (`lax`). Since the legend subplot should be smaller than the plot, we may use `gridspec_kw={"width_ratios":[4,1]}` at axes creation. We can hide the axes with `lax.axis("off")` but still put a legend in. The legend handles and labels need to be obtained from the real plot via `h, l = ax.get_legend_handles_labels()`, and can then be supplied to the legend in the `lax` subplot, `lax.legend(h, l)`. A complete example is below.

``````import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = 6,2

fig, (ax,lax) = plt.subplots(ncols=2, gridspec_kw={"width_ratios":[4,1]})
ax.plot(x,y, label="y=sin(x)")
....

h, l = ax.get_legend_handles_labels()
lax.legend(h, l)
lax.axis("off")

plt.tight_layout()
plt.show()
``````

This produces a plot, which is visually pretty similar to the plot from above:

We could also use the first axes to place the legend, but use the `bbox_transform` of the legend axes,

``````ax.legend(bbox_to_anchor=(0,0,1,1), bbox_transform=lax.transAxes)
lax.axis("off")
``````

In this approach, we do not need to obtain the legend handles externally, but we need to specify the `bbox_to_anchor` argument.

• Consider the matplotlib legend guide with some examples of other things you might want to do with legends.
• Some example code for placing legends for pie charts may directly be found in answer to this question: Python - Legend overlaps with the pie chart
• The `loc` argument can take numbers instead of strings, which makes calls shorter; however, the numbers are not mapped very intuitively to the string names. Here is the mapping for reference:
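
Since that table is not reproduced here, this is the standard matplotlib mapping (the dict name `LOC_CODES` is just illustrative):

``````
LOC_CODES = {
    "best": 0,         "upper right": 1,   "upper left": 2,
    "lower left": 3,   "lower right": 4,   "right": 5,
    "center left": 6,  "center right": 7,  "lower center": 8,
    "upper center": 9, "center": 10,
}
``````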

The fundamental misunderstanding here is in thinking that `range` is a generator. It's not. In fact, it's not any kind of iterator.

You can tell this pretty easily:

``````>>> a = range(5)
>>> print(list(a))
[0, 1, 2, 3, 4]
>>> print(list(a))
[0, 1, 2, 3, 4]
``````

If it were a generator, iterating it once would exhaust it:

``````>>> b = my_crappy_range(5)
>>> print(list(b))
[0, 1, 2, 3, 4]
>>> print(list(b))
[]
``````

What `range` actually is, is a sequence, just like a list. You can even test this:

``````>>> import collections.abc
>>> isinstance(a, collections.abc.Sequence)
True
``````

This means it has to follow all the rules of being a sequence:

``````>>> a[3]         # indexable
3
>>> len(a)       # sized
5
>>> 3 in a       # membership
True
>>> reversed(a)  # reversible
<range_iterator at 0x101cd2360>
>>> a.index(3)   # implements "index"
3
>>> a.count(3)   # implements "count"
1
``````

The difference between a `range` and a `list` is that a `range` is a lazy or dynamic sequence; it doesn't remember all of its values, it just remembers its `start`, `stop`, and `step`, and creates the values on demand on `__getitem__`.

(As a side note, if you `print(iter(a))`, you'll notice that `range` uses the same `listiterator` type as `list`. How does that work? A `listiterator` doesn't use anything special about `list` except for the fact that it provides a C implementation of `__getitem__`, so it works fine for `range` too.)

Now, there"s nothing that says that `Sequence.__contains__` has to be constant time‚Äîin fact, for obvious examples of sequences like `list`, it isn"t. But there"s nothing that says it can"t be. And it"s easier to implement `range.__contains__` to just check it mathematically (`(val - start) % step`, but with some extra complexity to deal with negative steps) than to actually generate and test all the values, so why shouldn"t it do it the better way?

But there doesn"t seem to be anything in the language that guarantees this will happen. As Ashwini Chaudhari points out, if you give it a non-integral value, instead of converting to integer and doing the mathematical test, it will fall back to iterating all the values and comparing them one by one. And just because CPython 3.2+ and PyPy 3.x versions happen to contain this optimization, and it"s an obvious good idea and easy to do, there"s no reason that IronPython or NewKickAssPython 3.x couldn"t leave it out. (And in fact, CPython 3.0-3.1 didn"t include it.)

If `range` actually were a generator, like `my_crappy_range`, then it wouldn't make sense to test `__contains__` this way, or at least the way it makes sense wouldn't be obvious. If you'd already iterated the first 3 values, is `1` still `in` the generator? Should testing for `1` cause it to iterate and consume all the values up to `1` (or up to the first value `>= 1`)?

# Using a for loop, how do I access the loop index, from 1 to 5 in this case?

Use `enumerate` to get the index with the element as you iterate:

``````for index, item in enumerate(items):
    print(index, item)
``````

And note that Python"s indexes start at zero, so you would get 0 to 4 with the above. If you want the count, 1 to 5, do this:

``````count = 0 # in case items is empty and you need it after the loop
for count, item in enumerate(items, start=1):
    print(count, item)
``````

# Unidiomatic control flow

What you are asking for is the Pythonic equivalent of the following, which is the algorithm most programmers of lower-level languages would use:

``````index = 0            # Python's indexing starts at zero
for item in items:   # Python's for loops are a "for each" loop
    print(index, item)
    index += 1
``````

Or in languages that do not have a for-each loop:

``````index = 0
while index < len(items):
    print(index, items[index])
    index += 1
``````

or sometimes more commonly (but unidiomatically) found in Python:

``````for index in range(len(items)):
    print(index, items[index])
``````

# Use the Enumerate Function

Python"s `enumerate` function reduces the visual clutter by hiding the accounting for the indexes, and encapsulating the iterable into another iterable (an `enumerate` object) that yields a two-item tuple of the index and the item that the original iterable would provide. That looks like this:

``````for index, item in enumerate(items, start=0):   # default is zero
    print(index, item)
``````

This code sample is fairly well the canonical example of the difference between code that is idiomatic of Python and code that is not. Idiomatic code is sophisticated (but not complicated) Python, written in the way that it was intended to be used. Idiomatic code is expected by the designers of the language, which means that usually this code is not just more readable, but also more efficient.

## Getting a count

Even if you don"t need indexes as you go, but you need a count of the iterations (sometimes desirable) you can start with `1` and the final number will be your count.

``````count = 0 # in case items is empty
for count, item in enumerate(items, start=1):   # default is zero
    print(item)

print("there were {0} items printed".format(count))
``````

The count seems to be more what you intend to ask for (as opposed to index) when you said you wanted from 1 to 5.

## Breaking it down - a step by step explanation

To break these examples down, say we have a list of items that we want to iterate over with an index:

``````items = ["a", "b", "c", "d", "e"]
``````

Now we pass this iterable to enumerate, creating an enumerate object:

``````enumerate_object = enumerate(items) # the enumerate object
``````

We can pull the first item out of this iterable that we would get in a loop with the `next` function:

``````iteration = next(enumerate_object) # first iteration from enumerate
print(iteration)
``````

And we see we get a tuple of `0`, the first index, and `"a"`, the first item:

``````(0, "a")
``````

We can use what is referred to as "sequence unpacking" to extract the elements from this two-tuple:

``````index, item = iteration
#   0,  "a" = (0, "a") # essentially this.
``````

and when we inspect `index`, we find it refers to the first index, 0, and `item` refers to the first item, `"a"`.

``````>>> print(index)
0
>>> print(item)
a
``````

# Conclusion

• Python indexes start at zero
• To get these indexes from an iterable as you iterate over it, use the enumerate function
• Using enumerate in the idiomatic way (along with tuple unpacking) creates code that is more readable and maintainable:

So do this:

``````for index, item in enumerate(items, start=0):   # Python indexes start at zero
    print(index, item)
``````

## Finding median of list in Python

How do you find the median of a list in Python? The list can be of any size and the numbers are not guaranteed to be in any particular order.

If the list contains an even number of elements, the function should return the average of the middle two.

Here are some examples (sorted for display purposes):

``````median([1]) == 1
median([1, 1]) == 1
median([1, 1, 2, 4]) == 1.5
median([0, 2, 5, 6, 8, 9, 9]) == 6
median([0, 0, 0, 0, 4, 4, 6, 8]) == 2
``````
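
One straightforward way to satisfy these requirements, as a minimal sketch (sort first, then pick or average the middle values):

``````
def median(lst):
    s = sorted(lst)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:                       # odd length: the single middle element
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2     # even length: average the middle two

print(median([1, 1, 2, 4]))              # 1.5
print(median([0, 2, 5, 6, 8, 9, 9]))     # 6
``````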

To begin, note that quantiles is just the most general term for things like percentiles, quartiles, and medians. You specified five bins in your example, so you are asking `qcut` for quintiles.
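
For reference, a hypothetical setup matching the outputs below (the variable name `factors` comes from the question; the values are 30 draws from a standard normal):

``````
import numpy as np
import pandas as pd

factors = np.random.randn(30)
``````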

So, when you ask for quintiles with `qcut`, the bins will be chosen so that you have the same number of records in each bin. You have 30 records, so you should have 6 in each bin (your output should look like this, although the breakpoints will differ due to the random draw):

``````pd.qcut(factors, 5).value_counts()

[-2.578, -0.829]    6
(-0.829, -0.36]     6
(-0.36, 0.366]      6
(0.366, 0.868]      6
(0.868, 2.617]      6
``````

Conversely, for `cut` you will see something more uneven:

``````pd.cut(factors, 5).value_counts()

(-2.583, -1.539]    5
(-1.539, -0.5]      5
(-0.5, 0.539]       9
(0.539, 1.578]      9
(1.578, 2.617]      2
``````

That"s because `cut` will choose the bins to be evenly spaced according to the values themselves and not the frequency of those values. Hence, because you drew from a random normal, you"ll see higher frequencies in the inner bins and fewer in the outer. This is essentially going to be a tabular form of a histogram (which you would expect to be fairly bell shaped with 30 records).

Here are some benchmarks for the various answers to this question. There were some surprising results, including wildly different performance depending on the string being tested.

Some functions were modified to work with Python 3 (mainly by replacing `/` with `//` to ensure integer division). If you see something wrong, want to add your function, or want to add another test string, ping @ZeroPiraeus in the Python chatroom.

In summary: there"s about a 50x difference between the best- and worst-performing solutions for the large set of example data supplied by OP here (via this comment). David Zhang"s solution is the clear winner, outperforming all others by around 5x for the large example set.

A couple of the answers are very slow in extremely large "no match" cases. Otherwise, the functions seem to be equally matched or clear winners depending on the test.

Here are the results, including plots made using matplotlib and seaborn to show the different distributions:

Corpus 1 (supplied examples - small set)

``````mean performance:
0.0003  david_zhang
0.0009  zero
0.0013  antti
0.0013  tigerhawk_2
0.0015  carpetpython
0.0029  tigerhawk_1
0.0031  davidism
0.0035  saksham
0.0046  shashank
0.0056  piotr

median performance:
0.0003  david_zhang
0.0008  zero
0.0013  antti
0.0013  tigerhawk_2
0.0014  carpetpython
0.0027  tigerhawk_1
0.0031  davidism
0.0038  saksham
0.0044  shashank
0.0058  piotr
``````

Corpus 2 (supplied examples - large set)

``````mean performance:
0.0006  david_zhang
0.0036  tigerhawk_2
0.0036  antti
0.0037  zero
0.0039  carpetpython
0.0052  shashank
0.0056  piotr
0.0066  davidism
0.0120  tigerhawk_1
0.0283  saksham

median performance:
0.0004  david_zhang
0.0018  zero
0.0022  tigerhawk_2
0.0022  antti
0.0024  carpetpython
0.0043  davidism
0.0049  shashank
0.0055  piotr
0.0061  tigerhawk_1
0.0109  saksham
``````

Corpus 3 (edge cases)

``````mean performance:
0.0123  shashank
0.0375  david_zhang
0.0376  piotr
0.0394  carpetpython
0.0479  antti
0.0488  tigerhawk_2
0.2269  tigerhawk_1
0.2336  davidism
0.7239  saksham
3.6265  zero

median performance:
0.0107  tigerhawk_2
0.0108  antti
0.0109  carpetpython
0.0135  david_zhang
0.0137  tigerhawk_1
0.0150  shashank
0.0229  saksham
0.0255  piotr
0.0721  davidism
0.1080  zero
``````

The tests and raw results are available here.

To understand what `yield` does, you must understand what generators are. And before you can understand generators, you must understand iterables.

## Iterables

When you create a list, you can read its items one by one. Reading its items one by one is called iteration:

``````>>> mylist = [1, 2, 3]
>>> for i in mylist:
...    print(i)
1
2
3
``````

`mylist` is an iterable. When you use a list comprehension, you create a list, and so an iterable:

``````>>> mylist = [x*x for x in range(3)]
>>> for i in mylist:
...    print(i)
0
1
4
``````

Everything you can use "`for... in...`" on is an iterable: lists, strings, files...

These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values.

## Generators

Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly:

``````>>> mygenerator = (x*x for x in range(3))
>>> for i in mygenerator:
...    print(i)
0
1
4
``````

It is just the same except you used `()` instead of `[]`. BUT, you cannot perform `for i in mygenerator` a second time, since generators can only be used once: they calculate 0, then forget about it and calculate 1, and finish by calculating 4, one by one.
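A tiny sketch of that single-use behaviour: the second pass produces nothing, because the generator is already exhausted.

``````mygenerator = (x * x for x in range(3))
print(list(mygenerator))  # [0, 1, 4]
print(list(mygenerator))  # []  (already exhausted, nothing left to produce)
``````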

## Yield

`yield` is a keyword that is used like `return`, except the function will return a generator.

``````>>> def create_generator():
...    mylist = range(3)
...    for i in mylist:
...        yield i*i
...
>>> mygenerator = create_generator() # create a generator
>>> print(mygenerator) # mygenerator is an object!
<generator object create_generator at 0xb7555c34>
>>> for i in mygenerator:
...     print(i)
0
1
4
``````

Here it"s a useless example, but it"s handy when you know your function will return a huge set of values that you will only need to read once.

To master `yield`, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object; this is a bit tricky.

Then, your code will continue from where it left off each time `for` uses the generator.

Now the hard part:

The first time the `for` calls the generator object created from your function, it will run the code in your function from the beginning until it hits `yield`, then it'll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting `yield`. That can be because the loop has come to an end, or because you no longer satisfy an `"if/else"`.
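Here is a small illustrative sketch of that behaviour (`count_up_to` is just a made-up name, not from the original code): nothing runs at call time, and the generator is exhausted once the loop in the body finishes.

``````def count_up_to(limit):
    print("body starts running")  # only printed when iteration starts, not at call time
    for i in range(limit):
        yield i

gen = count_up_to(3)   # nothing printed yet: the body has not run
print(next(gen))       # prints "body starts running", then 0
print(next(gen))       # 1
print(next(gen))       # 2
# one more next(gen) would raise StopIteration: the loop ended without reaching yield again
``````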

Generator:

``````# Here you create the method of the node object that will return the generator
def _get_child_candidates(self, distance, min_dist, max_dist):

    # Here is the code that will be called each time you use the generator object:

    # If there is still a child of the node object on its left
    # AND if the distance is ok, return the next child
    if self._leftchild and distance - max_dist < self._median:
        yield self._leftchild

    # If there is still a child of the node object on its right
    # AND if the distance is ok, return the next child
    if self._rightchild and distance + max_dist >= self._median:
        yield self._rightchild

    # If the function arrives here, the generator will be considered empty:
    # there are no more than two values, the left and the right children
``````

Caller:

``````# Create an empty list and a list with the current object reference
result, candidates = list(), [self]

# Loop on candidates (they contain only one element at the beginning)
while candidates:

    # Get the last candidate and remove it from the list
    node = candidates.pop()

    # Get the distance between obj and the candidate
    distance = node._get_dist(obj)

    # If the distance is ok, then you can fill the result
    if distance <= max_dist and distance >= min_dist:
        result.extend(node._values)

    # Add the children of the candidate to the candidates list
    # so the loop will keep running until it has looked
    # at all the children of the children of the children, etc. of the candidate
    candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))

return result
``````

This code contains several smart parts:

• The loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all this nested data, even if it's a bit dangerous since you can end up with an infinite loop. In this case, `candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))` exhausts all the values of the generator, but `while` keeps creating new generator objects, which will produce different values from the previous ones since they are not applied to the same node.

• The `extend()` method is a list object method that expects an iterable and adds its values to the list.

Usually we pass a list to it:

``````>>> a = [1, 2]
>>> b = [3, 4]
>>> a.extend(b)
>>> print(a)
[1, 2, 3, 4]
``````

But in your code, it gets a generator, which is good because:

1. You don"t need to read the values twice.
2. You may have a lot of children and you don"t want them all stored in memory.

And it works because Python does not care whether the argument of a method is a list or not. Python expects iterables, so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question...
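A minimal sketch of that point: `extend()` is happy to consume a generator expression directly.

``````a = [1, 2]
# extend() accepts any iterable, so a generator expression works just like a list
a.extend(x * x for x in range(3))
print(a)  # [1, 2, 0, 1, 4]
``````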

You can stop here, or read a little bit further to see an advanced use of a generator:

## Controlling generator exhaustion

``````>>> class Bank(): # Let's create a bank, building ATMs
...    crisis = False
...    def create_atm(self):
...        while not self.crisis:
...            yield "$100"
>>> hsbc = Bank() # When everything's ok the ATM gives you as much as you want
>>> corner_street_atm = hsbc.create_atm()
>>> print(corner_street_atm.next())
$100
>>> print(corner_street_atm.next())
$100
>>> print([corner_street_atm.next() for cash in range(5)])
['$100', '$100', '$100', '$100', '$100']
>>> hsbc.crisis = True # Crisis is coming, no more money!
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> wall_street_atm = hsbc.create_atm() # It's even true for new ATMs
>>> print(wall_street_atm.next())
<type 'exceptions.StopIteration'>
>>> hsbc.crisis = False # The trouble is, even post-crisis the ATM remains empty
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> brand_new_atm = hsbc.create_atm() # Build a new one to get back in business
>>> for cash in brand_new_atm:
...    print cash
$100
$100
$100
$100
$100
$100
$100
$100
$100
...
``````

Note: For Python 3, use `print(corner_street_atm.__next__())` or `print(next(corner_street_atm))`.

It can be useful for various things like controlling access to a resource.
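If you want to try the same idea on Python 3, a minimal re-run might look like this (names kept from the example above, behaviour unchanged):

``````class Bank:
    crisis = False
    def create_atm(self):
        while not self.crisis:
            yield "$100"

hsbc = Bank()
atm = hsbc.create_atm()
print(next(atm))   # $100
print(next(atm))   # $100
hsbc.crisis = True
# the next call to next(atm) raises StopIteration: the while condition fails on resume
``````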

The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one-liner? `Map / Zip` without creating another list?

Then just `import itertools`.

An example? Let"s see the possible orders of arrival for a four-horse race:

``````>>> import itertools
>>> horses = [1, 2, 3, 4]
>>> races = itertools.permutations(horses)
>>> print(races)
<itertools.permutations object at 0xb754f1dc>
>>> print(list(itertools.permutations(horses)))
[(1, 2, 3, 4),
(1, 2, 4, 3),
(1, 3, 2, 4),
(1, 3, 4, 2),
(1, 4, 2, 3),
(1, 4, 3, 2),
(2, 1, 3, 4),
(2, 1, 4, 3),
(2, 3, 1, 4),
(2, 3, 4, 1),
(2, 4, 1, 3),
(2, 4, 3, 1),
(3, 1, 2, 4),
(3, 1, 4, 2),
(3, 2, 1, 4),
(3, 2, 4, 1),
(3, 4, 1, 2),
(3, 4, 2, 1),
(4, 1, 2, 3),
(4, 1, 3, 2),
(4, 2, 1, 3),
(4, 2, 3, 1),
(4, 3, 1, 2),
(4, 3, 2, 1)]
``````

## Understanding the inner mechanisms of iteration

Iteration is a process involving iterables (implementing the `__iter__()` method) and iterators (implementing the `__next__()` method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables.

There is more about it in this article about how `for` loops work.
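As a rough sketch of that protocol (the `Countdown` class below is made up for illustration), an object becomes iterable by returning an iterator from `__iter__()`, and an iterator by producing values from `__next__()`:

``````class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):          # being iterable: return an iterator (here, itself)
        return self

    def __next__(self):          # being an iterator: produce the next value or stop
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

for n in Countdown(3):
    print(n)   # 3, 2, 1
``````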

You might be interested in the SciPy Stats package. It has the percentile function you're after and many other statistical goodies.

`percentile()` is available in `numpy` too.

``````import numpy as np
a = np.array([1, 2, 3, 4, 5])
p = np.percentile(a, 50)  # return the 50th percentile, i.e. the median
print(p)
# 3.0
``````

This ticket leads me to believe they won't be integrating `percentile()` into numpy anytime soon.

Python 3.4 has `statistics.median`:

Return the median (middle value) of numeric data.

When the number of data points is odd, return the middle data point. When the number of data points is even, the median is interpolated by taking the average of the two middle values:

``````>>> median([1, 3, 5])
3
>>> median([1, 3, 5, 7])
4.0
``````

Usage:

``````import statistics

items = [6, 1, 8, 2, 3]

statistics.median(items)
#>>> 3
``````

It"s pretty careful with types, too:

``````statistics.median(map(float, items))
#>>> 3.0

from decimal import Decimal
statistics.median(map(Decimal, items))
#>>> Decimal("3")
``````

Something important when dealing with outliers is that one should try to use estimators that are as robust as possible. The mean of a distribution will be biased by outliers, but the median, for example, will be affected much less.

``````import numpy as np

def reject_outliers(data, m=2.):
    d = np.abs(data - np.median(data))
    mdev = np.median(d)
    s = d / mdev if mdev else 0.
    return data[s < m]
``````

Here I have replaced the mean with the more robust median, and the standard deviation with the median absolute distance to the median. I then scaled the distances by their (again) median value so that `m` is on a reasonable relative scale.

Note that for the `data[s<m]` syntax to work, `data` must be a numpy array.
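A hypothetical usage sketch, assuming the `reject_outliers` function above and a single obvious outlier in the data:

``````import numpy as np

# Made-up data: one obvious outlier in an otherwise tight cluster
data = np.array([1.0, 1.0, 1.1, 0.9, 50.0])
print(reject_outliers(data))   # array([1. , 1. , 1.1, 0.9])
``````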

``````>>> k = [[1, 2], [4], [5, 6, 2], [1, 2], [3], [4]]
>>> import itertools
>>> k.sort()
>>> list(k for k,_ in itertools.groupby(k))
[[1, 2], [3], [4], [5, 6, 2]]
``````

`itertools` often offers the fastest and most powerful solutions to this kind of problem, and is well worth getting intimately familiar with!-)

Edit: as I mention in a comment, normal optimization efforts are focused on large inputs (the big-O approach) because it is so much easier and offers good returns on effort. But sometimes (essentially for "tragically crucial bottlenecks" in deep inner loops of code that's pushing the boundaries of performance limits) one may need to go into much more detail, providing probability distributions, deciding which performance measures to optimize (maybe the upper bound or the 90th centile is more important than an average or median, depending on one's apps), performing possibly-heuristic checks at the start to pick different algorithms depending on input data characteristics, and so forth.

Careful measurements of "point" performance (code A vs code B for a specific input) are a part of this extremely costly process, and the standard library module `timeit` helps here. However, it's easier to use it at a shell prompt. For example, here's a short module to showcase the general approach for this problem; save it as `nodup.py`:

``````import itertools

k = [[1, 2], [4], [5, 6, 2], [1, 2], [3], [4]]

def doset(k, map=map, list=list, set=set, tuple=tuple):
    return map(list, set(map(tuple, k)))

def dosort(k, sorted=sorted, xrange=xrange, len=len):
    ks = sorted(k)
    return [ks[i] for i in xrange(len(ks)) if i == 0 or ks[i] != ks[i-1]]

def dogroupby(k, sorted=sorted, groupby=itertools.groupby, list=list):
    ks = sorted(k)
    return [i for i, _ in itertools.groupby(ks)]

def donewk(k):
    newk = []
    for i in k:
        if i not in newk:
            newk.append(i)
    return newk

# sanity check that all functions compute the same result and don't alter k
if __name__ == "__main__":
    savek = list(k)
    for f in doset, dosort, dogroupby, donewk:
        resk = f(k)
        assert k == savek
        print "%10s %s" % (f.__name__, sorted(resk))
``````

Note the sanity check (performed when you just do `python nodup.py`) and the basic hoisting technique (make constant global names local to each function for speed) to put things on equal footing.

Now we can run checks on the tiny example list:

``````$ python -mtimeit -s"import nodup" "nodup.doset(nodup.k)"
100000 loops, best of 3: 11.7 usec per loop
$ python -mtimeit -s"import nodup" "nodup.dosort(nodup.k)"
100000 loops, best of 3: 9.68 usec per loop
$ python -mtimeit -s"import nodup" "nodup.dogroupby(nodup.k)"
100000 loops, best of 3: 8.74 usec per loop
$ python -mtimeit -s"import nodup" "nodup.donewk(nodup.k)"
100000 loops, best of 3: 4.44 usec per loop
``````

confirming that the quadratic approach has small-enough constants to make it attractive for tiny lists with few duplicated values. With a short list without duplicates:

``````$ python -mtimeit -s"import nodup" "nodup.donewk([[i] for i in range(12)])"
10000 loops, best of 3: 25.4 usec per loop
$ python -mtimeit -s"import nodup" "nodup.dogroupby([[i] for i in range(12)])"
10000 loops, best of 3: 23.7 usec per loop
$ python -mtimeit -s"import nodup" "nodup.doset([[i] for i in range(12)])"
10000 loops, best of 3: 31.3 usec per loop
$ python -mtimeit -s"import nodup" "nodup.dosort([[i] for i in range(12)])"
10000 loops, best of 3: 25 usec per loop
``````

the quadratic approach isn"t bad, but the sort and groupby ones are better. Etc, etc.

If (as the obsession with performance suggests) this operation is in a core inner loop of your pushing-the-boundaries application, it's worth trying the same set of tests on other representative input samples, possibly detecting some simple measure that could heuristically let you pick one or the other approach (but the measure must be fast, of course).

It"s also well worth considering keeping a different representation for `k` -- why does it have to be a list of lists rather than a set of tuples in the first place? If the duplicate removal task is frequent, and profiling shows it to be the program"s performance bottleneck, keeping a set of tuples all the time and getting a list of lists from it only if and where needed, might be faster overall, for example.

(Works with Python 2 and Python 3):

``````def median(lst):
    n = len(lst)
    s = sorted(lst)
    return (sum(s[n//2-1:n//2+1])/2.0, s[n//2])[n % 2] if n else None
``````

``````>>> median([-5, -5, -3, -4, 0, -1])
-3.5
``````
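If the tuple-indexing trick in that one-liner reads as cryptic, here is an equivalent, more explicit version (`median_explicit` is just an illustrative name):

``````def median_explicit(lst):
    n = len(lst)
    if n == 0:
        return None
    s = sorted(lst)
    mid = n // 2
    if n % 2:                              # odd length: the middle element
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2.0     # even length: mean of the two middle elements
``````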

``````>>> from numpy import median
>>> median([1, -4, -1, -1, 1, -3])
-1.0
``````

``````>>> from statistics import median
>>> median([5, 2, 3, 8, 9, -2])
4.0
``````

Levenshtein Python extension and C library.

https://github.com/ztane/python-Levenshtein/

The Levenshtein Python C extension module contains functions for fast computation of:

• Levenshtein (edit) distance, and edit operations
• string similarity
• approximate median strings, and generally string averaging
• string sequence and set similarity

It supports both normal and Unicode strings.

``````$ pip install python-levenshtein
...
$ python
>>> import Levenshtein
>>> help(Levenshtein.ratio)
ratio(...)
Compute similarity of two strings.

ratio(string1, string2)

The similarity is a number between 0 and 1, it's usually equal or
somewhat higher than difflib.SequenceMatcher.ratio(), because it's
based on real minimal edit distance.

Examples:
>>> ratio("Hello world!", "Holly grail!")
0.58333333333333337
>>> ratio("Brian", "Jesus")
0.0

>>> help(Levenshtein.distance)
distance(...)
Compute absolute Levenshtein distance of two strings.

distance(string1, string2)

Examples (it"s hard to spell Levenshtein correctly):
>>> distance("Levenshtein", "Lenvinsten")
4
>>> distance("Levenshtein", "Levensthein")
2
>>> distance("Levenshtein", "Levenshten")
1
>>> distance("Levenshtein", "Levenshtein")
0
``````