Rename multiple files with Python


In Python 3, the rename() method is used to rename a file or directory. This method is part of the os module.

Suppose a folder contains a number of image files with arbitrary names, and you want to rename them in an orderly way, for example hostel1, hostel2, ... and so on. Doing this manually would be a tedious task, but this goal can be achieved with the rename() and listdir() methods of the os module.

The listdir() method lists all the contents of the given directory.

Syntax for listdir():

files = os.listdir('src'), where src is the path of the directory whose contents should be listed.
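
For instance, a quick check of what listdir() returns (this assumes a folder named xyz exists in the current working directory):

import os

# list everything inside the folder "xyz" (names only, no paths)
print(os.listdir("xyz"))
# e.g. ['img_001.jpg', 'img_002.jpg', 'notes.txt']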

The following code will do the job for us. It loops through the list of all files in the xyz folder, builds the destination (dst) and source (src) paths, and renames each file using the rename() method.

Below is the implementation:

# Python 3 code to rename multiple
# files in a directory or folder

# import the os module
import os

# Function for renaming multiple files
def main():
    i = 0

    for filename in os.listdir("xyz"):
        dst = "Hostel" + str(i) + ".jpg"
        # build the full source and destination paths
        src = os.path.join("xyz", filename)
        dst = os.path.join("xyz", dst)

        # rename() renames the file at src to dst
        os.rename(src, dst)
        i += 1

# Driver code
if __name__ == '__main__':

    # Calling the main() function
    main()

Output:

After the script runs, the files in the xyz folder are renamed to Hostel0.jpg, Hostel1.jpg, Hostel2.jpg, and so on.

Note: this code may not work in an online IDE because it uses an external directory with image files.
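
For comparison, here is a minimal sketch of the same rename loop written with pathlib (Python 3.4+); the folder name xyz and the Hostel prefix are just the example values used above:

from pathlib import Path

folder = Path("xyz")

# enumerate() supplies the counter; Path.rename() does the actual renaming
for i, old_path in enumerate(folder.iterdir()):
    new_path = folder / ("Hostel" + str(i) + ".jpg")
    old_path.rename(new_path)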





Rename multiple files with Python: StackOverflow Questions

How do I list all files of a directory?

How can I list all files of a directory in Python and add them to a list?
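
A minimal sketch with os.listdir; note that it returns bare names (files and subdirectories), so full paths and a file-only filter are added here. The directory name some_directory is a placeholder:

import os

directory = "some_directory"
names = os.listdir(directory)                     # names of files and folders
files = [os.path.join(directory, n) for n in names
         if os.path.isfile(os.path.join(directory, n))]
print(files)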

Importing files from different folder

I have the following folder structure.

application
├── app
│   └── folder
│       └── file.py
└── app2
    └── some_folder
        └── some_file.py

I want to import some functions from file.py in some_file.py.

I"ve tried

from application.app.folder.file import func_name

and some other various attempts but so far I couldn't manage to import properly. How can I do this?
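
One common workaround (a sketch, not the only approach) is to put the directory that contains file.py on sys.path before importing. The relative path below assumes some_file.py is executed directly from its own location:

import os
import sys

# .../application/app2/some_folder -> .../application/app/folder
folder_dir = os.path.abspath(
    os.path.join(os.path.dirname(__file__), "..", "..", "app", "folder"))
sys.path.insert(0, folder_dir)

from file import func_name  # the import now resolves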

If Python is interpreted, what are .pyc files?

I"ve been given to understand that Python is an interpreted language...
However, when I look at my Python source code I see .pyc files, which Windows identifies as "Compiled Python Files".

Where do these come in?

Find all files in a directory with extension .txt in Python

How can I find all the files in a directory having the extension .txt in Python?
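
Two short sketches, one with glob and one with os.listdir; the directory name mydir is a placeholder:

import glob
import os

txt_files = glob.glob(os.path.join("mydir", "*.txt"))

# or, equivalently, filter os.listdir() by suffix
txt_files = [f for f in os.listdir("mydir") if f.endswith(".txt")]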

How to import other Python files?

How do I import other files in Python?

  1. How exactly can I import a specific python file like import file.py?
  2. How can I import a folder instead of a specific file?
  3. I want to load a Python file dynamically at runtime, based on user input.
  4. I want to know how to load just one specific part from the file.

For example, in main.py I have:

from extra import * 

Although this gives me all the definitions in extra.py, when maybe all I want is a single definition:

def gap():
    print
    print

What do I add to the import statement to just get gap from extra.py?
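
For the last point, a plain from-import pulls in just that one name (assuming extra.py is importable, e.g. it sits next to main.py or is on sys.path):

from extra import gap   # only the gap function, nothing else from extra.py

gap()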

How to use glob() to find files recursively?

This is what I have:

glob(os.path.join("src","*.c"))

but I want to search the subfolders of src. Something like this would work:

glob(os.path.join("src","*.c"))
glob(os.path.join("src","*","*.c"))
glob(os.path.join("src","*","*","*.c"))
glob(os.path.join("src","*","*","*","*.c"))

But this is obviously limited and clunky.
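
Since Python 3.5, glob supports recursive matching via the ** pattern when recursive=True is passed, which replaces the hand-written levels above:

import glob
import os

# matches .c files in src and in all of its subdirectories, any depth
c_files = glob.glob(os.path.join("src", "**", "*.c"), recursive=True)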

How can I open multiple files using "with open" in Python?

I want to change a couple of files at one time, iff I can write to all of them. I'm wondering if I somehow can combine the multiple open calls with the with statement:

try:
  with open("a", "w") as a and open("b", "w") as b:
    do_something()
except IOError as e:
  print "Operation failed: %s" % e.strerror

If that"s not possible, what would an elegant solution to this problem look like?

How can I iterate over files in a given directory?

I need to iterate through all .asm files inside a given directory and do some actions on them.

How can this be done in an efficient way?
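
A straightforward sketch; the directory name asm_dir is a placeholder:

import os

directory = "asm_dir"
for filename in os.listdir(directory):
    if filename.endswith(".asm"):
        path = os.path.join(directory, filename)
        # do some actions on the file at `path`
        print(path)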

How to serve static files in Flask

So this is embarrassing. I've got an application that I threw together in Flask and for now it is just serving up a single static HTML page with some links to CSS and JS. And I can't find where in the documentation Flask describes returning static files. Yes, I could use render_template but I know the data is not templatized. I'd have thought send_file or url_for was the right thing, but I could not get those to work. In the meantime, I am opening the files, reading content, and rigging up a Response with appropriate mimetype:

import os.path

from flask import Flask, Response


app = Flask(__name__)
app.config.from_object(__name__)


def root_dir():  # pragma: no cover
    return os.path.abspath(os.path.dirname(__file__))


def get_file(filename):  # pragma: no cover
    try:
        src = os.path.join(root_dir(), filename)
        # Figure out how flask returns static files
        # Tried:
        # - render_template
        # - send_file
        # This should not be so non-obvious
        return open(src).read()
    except IOError as exc:
        return str(exc)


@app.route("/", methods=["GET"])
def metrics():  # pragma: no cover
    content = get_file("jenkins_analytics.html")
    return Response(content, mimetype="text/html")


@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def get_resource(path):  # pragma: no cover
    mimetypes = {
        ".css": "text/css",
        ".html": "text/html",
        ".js": "application/javascript",
    }
    complete_path = os.path.join(root_dir(), path)
    ext = os.path.splitext(path)[1]
    mimetype = mimetypes.get(ext, "text/html")
    content = get_file(complete_path)
    return Response(content, mimetype=mimetype)


if __name__ == "__main__":  # pragma: no cover
    app.run(port=80)

Does someone want to give a code sample or URL for this? I know this is going to be dead simple.
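
Flask ships a helper for exactly this; a minimal sketch using send_from_directory (the route and function names here are made up for the example):

import os.path

from flask import Flask, send_from_directory

app = Flask(__name__)
ROOT = os.path.abspath(os.path.dirname(__file__))

@app.route("/<path:filename>")
def static_file(filename):
    # guesses the mimetype and returns 404 for missing files
    return send_from_directory(ROOT, filename)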

Unzipping files in Python

I read through the zipfile documentation, but couldn't understand how to unzip a file, only how to zip a file. How do I unzip all the contents of a zip file into the same directory?
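
zipfile handles extraction as well; a minimal sketch (the archive name is a placeholder):

import zipfile

with zipfile.ZipFile("archive.zip") as zf:
    zf.extractall(".")   # extract everything into the current directory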

Answer #1

Recommendation for beginners:

This is my personal recommendation for beginners: start by learning virtualenv and pip, tools which work with both Python 2 and 3 and in a variety of situations, and pick up other tools once you start needing them.

PyPI packages not in the standard library:

  • virtualenv is a very popular tool that creates isolated Python environments for Python libraries. If you're not familiar with this tool, I highly recommend learning it, as it is a very useful tool, and I'll be making comparisons to it for the rest of this answer.

It works by installing a bunch of files in a directory (eg: env/), and then modifying the PATH environment variable to prefix it with a custom bin directory (eg: env/bin/). An exact copy of the python or python3 binary is placed in this directory, but Python is programmed to look for libraries relative to its path first, in the environment directory. It's not part of Python's standard library, but is officially blessed by the PyPA (Python Packaging Authority). Once activated, you can install packages in the virtual environment using pip.

  • pyenv is used to isolate Python versions. For example, you may want to test your code against Python 2.7, 3.6, 3.7 and 3.8, so you'll need a way to switch between them. Once activated, it prefixes the PATH environment variable with ~/.pyenv/shims, where there are special files matching the Python commands (python, pip). These are not copies of the Python-shipped commands; they are special scripts that decide on the fly which version of Python to run based on the PYENV_VERSION environment variable, or the .python-version file, or the ~/.pyenv/version file. pyenv also makes the process of downloading and installing multiple Python versions easier, using the command pyenv install.

  • pyenv-virtualenv is a plugin for pyenv by the same author as pyenv, to allow you to use pyenv and virtualenv at the same time conveniently. However, if you're using Python 3.3 or later, pyenv-virtualenv will try to run python -m venv if it is available, instead of virtualenv. You can use virtualenv and pyenv together without pyenv-virtualenv, if you don't want the convenience features.

  • virtualenvwrapper is a set of extensions to virtualenv (see docs). It gives you commands like mkvirtualenv, lssitepackages, and especially workon for switching between different virtualenv directories. This tool is especially useful if you want multiple virtualenv directories.

  • pyenv-virtualenvwrapper is a plugin for pyenv by the same author as pyenv, to conveniently integrate virtualenvwrapper into pyenv.

  • pipenv aims to combine Pipfile, pip and virtualenv into one command on the command-line. The virtualenv directory typically gets placed in ~/.local/share/virtualenvs/XXX, with XXX being a hash of the path of the project directory. This is different from virtualenv, where the directory is typically in the current working directory. pipenv is meant to be used when developing Python applications (as opposed to libraries). There are alternatives to pipenv, such as poetry, which I won't list here since this question is only about the packages that are similarly named.

Standard library:

  • pyvenv (not to be confused with pyenv in the previous section) is a script shipped with Python 3 but deprecated in Python 3.6 as it had problems (not to mention the confusing name). In Python 3.6+, the exact equivalent is python3 -m venv.

  • venv is a package shipped with Python 3, which you can run using python3 -m venv (although for some reason some distros separate it out into a separate distro package, such as python3-venv on Ubuntu/Debian). It serves the same purpose as virtualenv, but only has a subset of its features (see a comparison here). virtualenv continues to be more popular than venv, especially since the former supports both Python 2 and 3.

Answer #2

os.listdir() - list in the current directory

With listdir in the os module you get the files and the folders in the current directory.

 import os
 arr = os.listdir()
 print(arr)
 
 >>> ["$RECYCLE.BIN", "work.txt", "3ebooks.txt", "documents"]

Looking in a directory

arr = os.listdir("c:\files")

glob from glob

with glob you can specify a type of file to list like this

import glob

txtfiles = []
for file in glob.glob("*.txt"):
    txtfiles.append(file)

glob in a list comprehension

mylist = [f for f in glob.glob("*.txt")]

get the full path of only files in the current directory

import os
from os import listdir
from os.path import isfile, join

cwd = os.getcwd()
onlyfiles = [os.path.join(cwd, f) for f in os.listdir(cwd) if 
os.path.isfile(os.path.join(cwd, f))]
print(onlyfiles) 

["G:\getfilesname\getfilesname.py", "G:\getfilesname\example.txt"]

Getting the full path name with os.path.abspath

You get the full path in return

 import os
 files_path = [os.path.abspath(x) for x in os.listdir()]
 print(files_path)
 
 ["F:\documentiapplications.txt", "F:\documenticollections.txt"]

Walk: going through sub directories

os.walk returns the root, the list of directories and the list of files; that is why I unpacked them into r, d, f in the for loop. It then looks for other files and directories in the subfolders of the root, and so on, until there are no more subfolders.

import os

# Getting the current work directory (cwd)
thisdir = os.getcwd()

# r=root, d=directories, f = files
for r, d, f in os.walk(thisdir):
    for file in f:
        if file.endswith(".docx"):
            print(os.path.join(r, file))

os.listdir(): get files in the current directory (Python 2)

In Python 2, if you want the list of the files in the current directory, you have to give the argument as "." or os.getcwd() in the os.listdir method.

 import os
 arr = os.listdir(".")
 print(arr)
 
 >>> ["$RECYCLE.BIN", "work.txt", "3ebooks.txt", "documents"]

To go up in the directory tree

# Method 1
x = os.listdir("..")

# Method 2
x= os.listdir("/")

Get files: os.listdir() in a particular directory (Python 2 and 3)

 import os
 arr = os.listdir("F:\python")
 print(arr)
 
 >>> ["$RECYCLE.BIN", "work.txt", "3ebooks.txt", "documents"]

Get files of a particular subdirectory with os.listdir()

import os

x = os.listdir("./content")

os.walk(".") - current directory

 import os
 arr = next(os.walk("."))[2]
 print(arr)
 
 >>> ["5bs_Turismo1.pdf", "5bs_Turismo1.pptx", "esperienza.txt"]

next(os.walk(".")) and os.path.join("dir", "file")

 import os
 arr = []
 # next(os.walk(...)) returns a single (root, dirs, files) tuple
 r, d, f = next(os.walk(r"F:\_python"))
 for file in f:
     arr.append(os.path.join(r, file))

 for f in arr:
     print(f)

>>> F:\_python\dict_class.py
>>> F:\_python\programmi.txt

next(os.walk("F:\") - get the full path - list comprehension

 r, d, f = next(os.walk(r"F:\_python"))
 [os.path.join(r, file) for file in f]
 
 >>> ["F:\_python\dict_class.py", "F:\_python\programmi.txt"]

os.walk - get full path - all files in sub dirs

x = [os.path.join(r,file) for r,d,f in os.walk("F:\_python") for file in f]
print(x)

>>> ["F:\_python\dict.py", "F:\_python\progr.txt", "F:\_python\readl.py"]

os.listdir() - get only txt files

 arr_txt = [x for x in os.listdir() if x.endswith(".txt")]
 print(arr_txt)
 
 >>> ["work.txt", "3ebooks.txt"]

Using glob to get the full path of the files

If I should need the absolute path of the files:

from path import path
from glob import glob
x = [path(f).abspath() for f in glob("F:\*.txt")]
for f in x:
    print(f)

>>> F:\acquistionline.txt
>>> F:\acquisti_2018.txt
>>> F:\bootstrap_jquery_ecc.txt

Using os.path.isfile to avoid directories in the list

import os.path
listOfFiles = [f for f in os.listdir() if os.path.isfile(f)]
print(listOfFiles)

>>> ["a simple game.py", "data.txt", "decorator.py"]

Using pathlib from Python 3.4

import pathlib

flist = []
for p in pathlib.Path(".").iterdir():
    if p.is_file():
        print(p)
        flist.append(p)

 >>> error.PNG
 >>> exemaker.bat
 >>> guiprova.mp3
 >>> setup.py
 >>> speak_gui2.py
 >>> thumb.PNG

With list comprehension:

flist = [p for p in pathlib.Path(".").iterdir() if p.is_file()]

Alternatively, use pathlib.Path() instead of pathlib.Path(".")

Use glob method in pathlib.Path()

import pathlib

py = pathlib.Path().glob("*.py")
for file in py:
    print(file)

>>> stack_overflow_list.py
>>> stack_overflow_list_tkinter.py

Get all and only files with os.walk

import os
x = [i[2] for i in os.walk(".")]
y=[]
for t in x:
    for f in t:
        y.append(f)
print(y)

>>> ["append_to_list.py", "data.txt", "data1.txt", "data2.txt", "data_180617", "os_walk.py", "READ2.py", "read_data.py", "somma_defaltdic.py", "substitute_words.py", "sum_data.py", "data.txt", "data1.txt", "data_180617"]

Get only files with next and walk in a directory

 import os
 x = next(os.walk("F://python"))[2]
 print(x)
 
 >>> ["calculator.bat","calculator.py"]

Get only directories with next and walk in a directory

 import os
 next(os.walk("F://python"))[1] # for the current dir use (".")
 
 >>> ["python3","others"]

Get all the subdir names with walk

for r,d,f in os.walk("F:\_python"):
    for dirs in d:
        print(dirs)

>>> .vscode
>>> pyexcel
>>> pyschool.py
>>> subtitles
>>> _metaprogramming
>>> .ipynb_checkpoints

os.scandir() from Python 3.5 and greater

import os
x = [f.name for f in os.scandir() if f.is_file()]
print(x)

>>> ["calculator.bat","calculator.py"]

# Another example with scandir (a little variation from docs.python.org)
# This one is more efficient than os.listdir.
# In this case, it shows the files only in the current directory
# where the script is executed.

import os
with os.scandir() as i:
    for entry in i:
        if entry.is_file():
            print(entry.name)

>>> ebookmaker.py
>>> error.PNG
>>> exemaker.bat
>>> guiprova.mp3
>>> setup.py
>>> speakgui4.py
>>> speak_gui2.py
>>> speak_gui3.py
>>> thumb.PNG

Examples:

Ex. 1: How many files are there in the subdirectories?

In this example, we look for the number of files that are included in a directory and all of its subdirectories.

import os

def count(dir, counter=0):
    "returns number of files in dir and subdirs"
    for pack in os.walk(dir):
        for f in pack[2]:
            counter += 1
    return dir + " : " + str(counter) + " files"

print(count(r"F:\python"))

>>> "F:\python" : 12057 files"

Ex.2: How to copy all files from a directory to another?

A script to tidy up your computer by finding all files of a given type (default: pptx) and copying them into a new folder.

import os
import shutil

destination = r"F:\file_copied"
# os.makedirs(destination)

def copyfile(dir, filetype="pptx", counter=0):
    "Searches for pptx (or other - pptx is the default) files and copies them"
    for pack in os.walk(dir):
        for f in pack[2]:
            if f.endswith(filetype):
                fullpath = os.path.join(pack[0], f)
                print(fullpath)
                shutil.copy(fullpath, destination)
                counter += 1
    if counter > 0:
        print("-" * 30)
        print("\t==> Found in: `" + dir + "` : " + str(counter) + " files\n")

# search for folders whose name starts with "_"
for dir in os.listdir():
    if dir[0] == "_":
        # copyfile(dir, filetype="pdf")
        copyfile(dir, filetype="txt")


>>> _compiti18\Compito Contabilità 1\conti.txt
>>> _compiti18\Compito Contabilità 1\modula4.txt
>>> _compiti18\Compito Contabilità 1\moduloa4.txt
>>> ------------------------
>>> ==> Found in: `_compiti18` : 3 files

Ex. 3: How to get all the files in a txt file

In case you want to create a txt file with all the file names:

import os
mylist = ""
with open("filelist.txt", "w", encoding="utf-8") as file:
    for eachfile in os.listdir():
        mylist += eachfile + "\n"
    file.write(mylist)

Example: a txt file with all the files of a hard drive

"""
We are going to save a txt file with all the files in your directory.
We will use the function walk()
"""

import os

# see all the methods of os
# print(*dir(os), sep=", ")
listafile = []
percorso = []
with open("lista_file.txt", "w", encoding="utf-8") as testo:
    for root, dirs, files in os.walk("D:\"):
        for file in files:
            listafile.append(file)
            percorso.append(root + "\" + file)
            testo.write(file + "
")
listafile.sort()
print("N. of files", len(listafile))
with open("lista_file_ordinata.txt", "w", encoding="utf-8") as testo_ordinato:
    for file in listafile:
        testo_ordinato.write(file + "
")

with open("percorso.txt", "w", encoding="utf-8") as file_percorso:
    for file in percorso:
        file_percorso.write(file + "
")

os.system("lista_file.txt")
os.system("lista_file_ordinata.txt")
os.system("percorso.txt")

All the files of C: in one text file

This is a shorter version of the previous code. Change the folder where the search starts if you need to begin from another position. On my computer this code generates a text file of about 50 MB, with a little less than 500,000 lines containing the files with their complete paths.

import os

with open("file.txt", "w", encoding="utf-8") as filewrite:
    for r, d, f in os.walk("C:\\"):
        for file in f:
            filewrite.write(f"{os.path.join(r, file)}\n")

How to write a file with all paths in a folder of a type

With this function you can create a txt file named after the type of file you are looking for (e.g. pngfile.txt), containing the full paths of all the files of that type. It can be useful sometimes, I think.

import os

def searchfiles(extension=".ttf", folder="H:\\"):
    "Create a txt file with all the files of a type"
    with open(extension[1:] + "file.txt", "w", encoding="utf-8") as filewrite:
        for r, d, f in os.walk(folder):
            for file in f:
                if file.endswith(extension):
                    filewrite.write(f"{os.path.join(r, file)}\n")

# looking for png files in the hard disk H:
searchfiles(".png", "H:\\")

>>> H:4bs_18Dolphins5.png
>>> H:4bs_18Dolphins6.png
>>> H:4bs_18Dolphins7.png
>>> H:5_18marketing htmlassetsimageslogo2.png
>>> H:7z001.png
>>> H:7z002.png

(New) Find all files and open them with tkinter GUI

In 2019 I just wanted to add a little app that searches for all files in a dir and lets you open them by double-clicking on the name of the file in the list.

import tkinter as tk
import os

def searchfiles(extension=".txt", folder="H:\"):
    "insert all files in the listbox"
    for r, d, f in os.walk(folder):
        for file in f:
            if file.endswith(extension):
                lb.insert(0, r + "\" + file)

def open_file():
    os.startfile(lb.get(lb.curselection()[0]))

root = tk.Tk()
root.geometry("400x400")
bt = tk.Button(root, text="Search", command=lambda:searchfiles(".png", "H:\"))
bt.pack()
lb = tk.Listbox(root)
lb.pack(fill="both", expand=1)
lb.bind("<Double-Button>", lambda x: open_file())
root.mainloop()

Answer #3

-----> pip install gensim config --global http.sslVerify false

Just install any package with the "config --global http.sslVerify false" statement

You can ignore SSL errors by setting pypi.org and files.pythonhosted.org as trusted hosts.

$ pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org <package_name>

Note: Sometime during April 2018, the Python Package Index was migrated from pypi.python.org to pypi.org. This means "trusted-host" commands using the old domain no longer work.

Permanent Fix

Since the release of pip 10.0, you should be able to fix this permanently just by upgrading pip itself:

$ pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org pip setuptools

Or by just reinstalling it to get the latest version:

$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py

(… and then running get-pip.py with the relevant Python interpreter).

pip install <otherpackage> should just work after this. If not, then you will need to do more, as explained below.


You may want to add the trusted hosts and proxy to your config file.

pip.ini (Windows) or pip.conf (unix)

[global]
trusted-host = pypi.python.org
               pypi.org
               files.pythonhosted.org

Alternate Solutions (Less secure)

Most of the answers could pose a security issue.

Two of the workarounds that help in installing most of the python packages with ease would be:

  • Using easy_install: if you are really lazy and don't want to waste much time, use easy_install <package_name>. Note that some packages won't be found or will give small errors.
  • Using Wheel: download the Wheel of the python package and use the pip command pip install wheel_package_name.whl to install the package.

Answer #4

setup.py helps you install a Python package foo on your machine (possibly inside a virtualenv) so that you can import the package foo from other projects and also from [I]Python prompts.

It does a similar job to pip, easy_install, etc.


Using setup.py

Let"s start with some definitions:

Package - A folder/directory that contains an __init__.py file.
Module - A valid Python file with a .py extension.
Distribution - How one package relates to other packages and modules.

Let"s say you want to install a package named foo. Then you do,

$ git clone https://github.com/user/foo  
$ cd foo
$ python setup.py install

Instead, if you don"t want to actually install it but still would like to use it. Then do,

$ python setup.py develop  

This command will create symlinks to the source directory within site-packages instead of copying things. Because of this, it is quite fast (particularly for large packages).


Creating setup.py

If you have your package tree like,

foo
├── foo
│   ├── data_struct.py
│   ├── __init__.py
│   └── internals.py
├── README
├── requirements.txt
└── setup.py

Then, you do the following in your setup.py script so that it can be installed on some machine:

from setuptools import setup

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   author="Man Foo",
   author_email="[email protected]",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
)

Instead, if your package tree is more complex like the one below:

foo
├── foo
│   ├── data_struct.py
│   ├── __init__.py
│   └── internals.py
├── README
├── requirements.txt
├── scripts
│   ├── cool
│   └── skype
└── setup.py

Then, your setup.py in this case would be like:

from setuptools import setup

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   author="Man Foo",
   author_email="[email protected]",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
   scripts=[
            "scripts/cool",
            "scripts/skype",
           ]
)

Add more stuff to (setup.py) & make it decent:

from setuptools import setup

with open("README", "r") as f:
    long_description = f.read()

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   license="MIT",
   long_description=long_description,
   author="Man Foo",
   author_email="[email protected]",
   url="http://www.foopackage.com/",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
   scripts=[
            "scripts/cool",
            "scripts/skype",
           ]
)

The long_description is used in pypi.org as the README description of your package.


And finally, you"re now ready to upload your package to PyPi.org so that others can install your package using pip install yourpackage.

At this point there are two options.

  • publish in the temporary test.pypi.org server to make oneself familiarize with the procedure, and then publish it on the permanent pypi.org server for the public to use your package.
  • publish straight away on the permanent pypi.org server, if you are already familiar with the procedure and have your user credentials (e.g., username, password, package name)

Once your package name is registered on pypi.org, nobody else can claim or use it. The Python packaging documentation suggests the twine package for uploading purposes (of your package to PyPI). Thus,

(1) the first step is to locally build the distributions using:

# prereq: wheel (pip install wheel)  
$ python setup.py sdist bdist_wheel   

(2) then using twine for uploading either to test.pypi.org or pypi.org:

$ twine upload --repository testpypi dist/*  
username: ***  
password: ***  

It will take a few minutes for the package to appear on test.pypi.org. Once you're satisfied with it, you can then upload your package to the real & permanent index of pypi.org simply with:

$ twine upload dist/*  

Optionally, you can also sign the files in your package with GPG by:

$ twine upload dist/* --sign 


Answer #5

tl;dr / quick fix

  • Don"t decode/encode willy nilly
  • Don"t assume your strings are UTF-8 encoded
  • Try to convert strings to Unicode strings as soon as possible in your code
  • Fix your locale: How to solve UnicodeDecodeError in Python 3.6?
  • Don"t be tempted to use quick reload hacks

Unicode Zen in Python 2.x - The Long Version

Without seeing the source it"s difficult to know the root cause, so I"ll have to speak generally.

UnicodeDecodeError: "ascii" codec can"t decode byte generally happens when you try to convert a Python 2.x str that contains non-ASCII to a Unicode string without specifying the encoding of the original string.

In brief, Unicode strings are an entirely separate type of Python string that does not contain any encoding. They only hold Unicode code points and therefore can hold any Unicode point from across the entire spectrum. Strings contain encoded text, be it UTF-8, UTF-16, ISO-8859-1, GBK, Big5, etc. Strings are decoded to Unicode and Unicodes are encoded to strings. Files and text data are always transferred in encoded strings.

The Markdown module authors probably use unicode() (where the exception is thrown) as a quality gate to the rest of the code - it will convert ASCII or re-wrap existing Unicode strings to a new Unicode string. The Markdown authors can't know the encoding of the incoming string so will rely on you to decode strings to Unicode strings before passing to Markdown.

Unicode strings can be declared in your code using the u prefix to strings. E.g.

>>> my_u = u"my ünicôdé strįng"
>>> type(my_u)
<type "unicode">

Unicode strings may also come from files, databases and network modules. When this happens, you don't need to worry about the encoding.

Gotchas

Conversion from str to Unicode can happen even when you don't explicitly call unicode().

The following scenarios cause UnicodeDecodeError exceptions:

# Explicit conversion without encoding
unicode("€")

# New style format string into Unicode string
# Python will try to convert value string to Unicode first
u"The currency is: {}".format("€")

# Old style format string into Unicode string
# Python will try to convert value string to Unicode first
u"The currency is: %s" % "€"

# Append string to Unicode
# Python will try to convert string to Unicode first
u"The currency is: " + "€"         

Examples

In the following diagram, you can see how the word café has been encoded in either "UTF-8" or "Cp1252" encoding depending on the terminal type. In both examples, caf is just regular ASCII. In UTF-8, é is encoded using two bytes. In "Cp1252", é is 0xE9 (which also happens to be the Unicode code point value; it's no coincidence). The correct decode() is invoked and conversion to a Python Unicode string is successful. (Diagram: a string being converted to a Python Unicode string.)

In this diagram, decode() is called with ascii (which is the same as calling unicode() without an encoding given). As ASCII can't contain bytes greater than 0x7F, this will throw a UnicodeDecodeError exception:

(Diagram: a string being converted to a Python Unicode string with the wrong encoding.)

The Unicode Sandwich

It"s good practice to form a Unicode sandwich in your code, where you decode all incoming data to Unicode strings, work with Unicodes, then encode to strs on the way out. This saves you from worrying about the encoding of strings in the middle of your code.

Input / Decode

Source code

If you need to bake non-ASCII into your source code, just create Unicode strings by prefixing the string with a u. E.g.

u"Zürich"

To allow Python to decode your source code, you will need to add an encoding header to match the actual encoding of your file. For example, if your file was encoded as "UTF-8", you would use:

# encoding: utf-8

This is only necessary when you have non-ASCII in your source code.

Files

Usually non-ASCII data is received from a file. The io module provides a TextIOWrapper that decodes your file on the fly, using a given encoding. You must use the correct encoding for the file - it can't be easily guessed. For example, for a UTF-8 file:

import io
with io.open("my_utf8_file.txt", "r", encoding="utf-8") as my_file:
     my_unicode_string = my_file.read() 

my_unicode_string would then be suitable for passing to Markdown. If you get a UnicodeDecodeError from the read() line, then you've probably used the wrong encoding value.

CSV Files

The Python 2.7 CSV module does not support non-ASCII characters. Help is at hand, however, with https://pypi.python.org/pypi/backports.csv.

Use it like above but pass the opened file to it:

from backports import csv
import io
with io.open("my_utf8_file.txt", "r", encoding="utf-8") as my_file:
    for row in csv.reader(my_file):
        yield row

Databases

Most Python database drivers can return data in Unicode, but usually require a little configuration. Always use Unicode strings for SQL queries.

MySQL

In the connection string add:

charset="utf8",
use_unicode=True

E.g.

>>> db = MySQLdb.connect(host="localhost", user="root", passwd="passwd", db="sandbox", use_unicode=True, charset="utf8")
PostgreSQL

Add:

psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)

HTTP

Web pages can be encoded in just about any encoding. The Content-type header should contain a charset field to hint at the encoding. The content can then be decoded manually against this value. Alternatively, Python-Requests returns Unicodes in response.text.

Manually

If you must decode strings manually, you can simply do my_string.decode(encoding), where encoding is the appropriate encoding. Python 2.x supported codecs are given here: Standard Encodings. Again, if you get UnicodeDecodeError then you've probably got the wrong encoding.

The meat of the sandwich

Work with Unicodes as you would normal strs.

Output

stdout / printing

print writes through the stdout stream. Python tries to configure an encoder on stdout so that Unicodes are encoded to the console's encoding. For example, if a Linux shell's locale is en_GB.UTF-8, the output will be encoded to UTF-8. On Windows, you will be limited to an 8-bit code page.

An incorrectly configured console, such as a corrupt locale, can lead to unexpected print errors. The PYTHONIOENCODING environment variable can force the encoding for stdout.

Files

Just like input, io.open can be used to transparently convert Unicodes to encoded byte strings.

Database

The same configuration for reading will allow Unicodes to be written directly.

Python 3

Python 3 is no more Unicode capable than Python 2.x is; however, it is slightly less confused on the topic. E.g. the regular str is now a Unicode string and the old str is now bytes.

The default encoding is UTF-8, so if you .decode() a byte string without giving an encoding, Python 3 uses UTF-8 encoding. This probably fixes 50% of people's Unicode problems.

Further, open() operates in text mode by default, so returns decoded str (Unicode ones). The encoding is derived from your locale, which tends to be UTF-8 on Un*x systems or an 8-bit code page, such as windows-1251, on Windows boxes.

Why you shouldn"t use sys.setdefaultencoding("utf8")

It"s a nasty hack (there"s a reason you have to use reload) that will only mask problems and hinder your migration to Python 3.x. Understand the problem, fix the root cause and enjoy Unicode zen. See Why should we NOT use sys.setdefaultencoding("utf-8") in a py script? for further details

Answer #6

You can"t.

One workaround is to clone the environment under a new name and then remove the original one.

First, remember to deactivate your current environment. You can do this with the commands:

  • deactivate on Windows or
  • source deactivate on macOS/Linux.

Then:

conda create --name new_name --clone old_name
conda remove --name old_name --all # or its alias: `conda env remove --name old_name`

Notice there are several drawbacks of this method:

  1. It redownloads packages (you can use --offline flag to disable it)
  2. Time consumed on copying the environment's files
  3. Temporary double disk usage

There is an open issue requesting this feature.

Answer #7

Explanation

From PEP 328

Relative imports use a module"s __name__ attribute to determine that module"s position in the package hierarchy. If the module"s name does not contain any package information (e.g. it is set to "__main__") then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system.

At some point PEP 338 conflicted with PEP 328:

... relative imports rely on __name__ to determine the current module's position in the package hierarchy. In a main module, the value of __name__ is always "__main__", so explicit relative imports will always fail (as they only work for a module inside a package)

and to address the issue, PEP 366 introduced the top level variable __package__:

By adding a new module level attribute, this PEP allows relative imports to work automatically if the module is executed using the -m switch. A small amount of boilerplate in the module itself will allow the relative imports to work when the file is executed by name. [...] When it [the attribute] is present, relative imports will be based on this attribute rather than the module __name__ attribute. [...] When the main module is specified by its filename, then the __package__ attribute will be set to None. [...] When the import system encounters an explicit relative import in a module without __package__ set (or with it set to None), it will calculate and store the correct value (__name__.rpartition(".")[0] for normal modules and __name__ for package initialisation modules)

(emphasis mine)

If the __name__ is "__main__", __name__.rpartition(".")[0] returns empty string. This is why there"s empty string literal in the error description:

SystemError: Parent module "" not loaded, cannot perform relative import

The relevant part of the CPython"s PyImport_ImportModuleLevelObject function:

if (PyDict_GetItem(interp->modules, package) == NULL) {
    PyErr_Format(PyExc_SystemError,
            "Parent module %R not loaded, cannot perform relative "
            "import", package);
    goto error;
}

CPython raises this exception if it was unable to find package (the name of the package) in interp->modules (accessible as sys.modules). Since sys.modules is "a dictionary that maps module names to modules which have already been loaded", it's now clear that the parent module must be explicitly absolute-imported before performing relative import.

Note: The patch from the issue 18018 has added another if block, which will be executed before the code above:

if (PyUnicode_CompareWithASCIIString(package, "") == 0) {
    PyErr_SetString(PyExc_ImportError,
            "attempted relative import with no known parent package");
    goto error;
} /* else if (PyDict_GetItem(interp->modules, package) == NULL) {
    ...
*/

If package (same as above) is empty string, the error message will be

ImportError: attempted relative import with no known parent package

However, you will only see this in Python 3.6 or newer.

Solution #1: Run your script using -m

Consider a directory (which is a Python package):

.
├── package
│   ├── __init__.py
│   ├── module.py
│   └── standalone.py

All of the files in package begin with the same 2 lines of code:

from pathlib import Path
print("Running" if __name__ == "__main__" else "Importing", Path(__file__).resolve())

I"m including these two lines only to make the order of operations obvious. We can ignore them completely, since they don"t affect the execution.

__init__.py and module.py contain only those two lines (i.e., they are effectively empty).

standalone.py additionally attempts to import module.py via relative import:

from . import module  # explicit relative import

We"re well aware that /path/to/python/interpreter package/standalone.py will fail. However, we can run the module with the -m command line option that will "search sys.path for the named module and execute its contents as the __main__ module":

vaultah:~$ python3 -i -m package.standalone
Importing /home/vaultah/package/__init__.py
Running /home/vaultah/package/standalone.py
Importing /home/vaultah/package/module.py
>>> __file__
"/home/vaultah/package/standalone.py"
>>> __package__
"package"
>>> # The __package__ has been correctly set and module.py has been imported.
... # What"s inside sys.modules?
... import sys
>>> sys.modules["__main__"]
<module "package.standalone" from "/home/vaultah/package/standalone.py">
>>> sys.modules["package.module"]
<module "package.module" from "/home/vaultah/package/module.py">
>>> sys.modules["package"]
<module "package" from "/home/vaultah/package/__init__.py">

-m does all the importing stuff for you and automatically sets __package__, but you can do that yourself in the next solution.

Solution #2: Set __package__ manually

Please treat it as a proof of concept rather than an actual solution. It isn't well-suited for use in real-world code.

PEP 366 has a workaround to this problem, however, it's incomplete, because setting __package__ alone is not enough. You're going to need to import at least N preceding packages in the module hierarchy, where N is the number of parent directories (relative to the directory of the script) that will be searched for the module being imported.

Thus,

  1. Add the parent directory of the Nth predecessor of the current module to sys.path

  2. Remove the current file's directory from sys.path

  3. Import the parent module of the current module using its fully-qualified name

  4. Set __package__ to the fully-qualified name from step 3

  5. Perform the relative import

I"ll borrow files from the Solution #1 and add some more subpackages:

package
├── __init__.py
├── module.py
└── subpackage
    ├── __init__.py
    └── subsubpackage
        ├── __init__.py
        └── standalone.py

This time standalone.py will import module.py from the package package using the following relative import

from ... import module  # N = 3

We"ll need to precede that line with the boilerplate code, to make it work.

import sys
from pathlib import Path

if __name__ == "__main__" and __package__ is None:
    file = Path(__file__).resolve()
    parent, top = file.parent, file.parents[3]

    sys.path.append(str(top))
    try:
        sys.path.remove(str(parent))
    except ValueError: # Already removed
        pass

    import package.subpackage.subsubpackage
    __package__ = "package.subpackage.subsubpackage"

from ... import module # N = 3

It allows us to execute standalone.py by filename:

vaultah:~$ python3 package/subpackage/subsubpackage/standalone.py
Running /home/vaultah/package/subpackage/subsubpackage/standalone.py
Importing /home/vaultah/package/__init__.py
Importing /home/vaultah/package/subpackage/__init__.py
Importing /home/vaultah/package/subpackage/subsubpackage/__init__.py
Importing /home/vaultah/package/module.py

A more general solution wrapped in a function can be found here. Example usage:

if __name__ == "__main__" and __package__ is None:
    import_parents(level=3) # N = 3

from ... import module
from ...module.submodule import thing

Solution #3: Use absolute imports and setuptools

The steps are -

  1. Replace explicit relative imports with equivalent absolute imports

  2. Install package to make it importable

For instance, the directory structure may be as follows

.
├── project
│   ├── package
│   │   ├── __init__.py
│   │   ├── module.py
│   │   └── standalone.py
│   └── setup.py

where setup.py is

from setuptools import setup, find_packages
setup(
    name = "your_package_name",
    packages = find_packages(),
)

The rest of the files were borrowed from the Solution #1.

Installation will allow you to import the package regardless of your working directory (assuming there'll be no naming issues).

We can modify standalone.py to use this advantage (step 1):

from package import module  # absolute import

Change your working directory to project and run /path/to/python/interpreter setup.py install --user (--user installs the package in your site-packages directory) (step 2):

vaultah:~$ cd project
vaultah:~/project$ python3 setup.py install --user

Let"s verify that it"s now possible to run standalone.py as a script:

vaultah:~/project$ python3 -i package/standalone.py
Running /home/vaultah/project/package/standalone.py
Importing /home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/__init__.py
Importing /home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py
>>> module
<module "package.module" from "/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py">
>>> import sys
>>> sys.modules["package"]
<module "package" from "/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/__init__.py">
>>> sys.modules["package.module"]
<module "package.module" from "/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py">

Note: If you decide to go down this route, you'd be better off using virtual environments to install packages in isolation.

Solution #4: Use absolute imports and some boilerplate code

Frankly, the installation is not necessary - you could add some boilerplate code to your script to make absolute imports work.

I"m going to borrow files from Solution #1 and change standalone.py:

  1. Add the parent directory of package to sys.path before attempting to import anything from package using absolute imports:

    import sys
    from pathlib import Path # if you haven't already done so
    file = Path(__file__).resolve()
    parent, root = file.parent, file.parents[1]
    sys.path.append(str(root))
    
    # Additionally remove the current file's directory from sys.path
    try:
        sys.path.remove(str(parent))
    except ValueError: # Already removed
        pass
    
  2. Replace the relative import by the absolute import:

    from package import module  # absolute import
    

standalone.py runs without problems:

vaultah:~$ python3 -i package/standalone.py
Running /home/vaultah/package/standalone.py
Importing /home/vaultah/package/__init__.py
Importing /home/vaultah/package/module.py
>>> module
<module "package.module" from "/home/vaultah/package/module.py">
>>> import sys
>>> sys.modules["package"]
<module "package" from "/home/vaultah/package/__init__.py">
>>> sys.modules["package.module"]
<module "package.module" from "/home/vaultah/package/module.py">

I feel that I should warn you: try not to do this, especially if your project has a complex structure.


As a side note, PEP 8 recommends the use of absolute imports, but states that in some scenarios explicit relative imports are acceptable:

Absolute imports are recommended, as they are usually more readable and tend to be better behaved (or at least give better error messages). [...] However, explicit relative imports are an acceptable alternative to absolute imports, especially when dealing with complex package layouts where using absolute imports would be unnecessarily verbose.

Answer #8

Is this the correct use of conftest.py?

Yes it is. Fixtures are a potential and common use of conftest.py. The fixtures that you will define will be shared among all tests in your test suite. However, defining fixtures in the root conftest.py might be useless and it would slow down testing if such fixtures are not used by all tests.

Does it have other uses?

Yes it does.

  • Fixtures: Define fixtures for static data used by tests. This data can be accessed by all tests in the suite unless specified otherwise. This could be data as well as helpers of modules which will be passed to all tests.

  • External plugin loading: conftest.py is used to import external plugins or modules. By defining the following global variable, pytest will load the module and make it available for its test. Plugins are generally files defined in your project or other modules which might be needed in your tests. You can also load a set of predefined plugins as explained here.

    pytest_plugins = "someapp.someplugin"

  • Hooks: You can specify hooks such as setup and teardown methods and much more to improve your tests. For a set of available hooks, read Hooks link. Example:

      def pytest_runtest_setup(item):
          """ called before ``pytest_runtest_call(item)``. """
          # do some stuff
    
  • Test root path: This is a bit of a hidden feature. By defining conftest.py in your root path, you will have pytest recognizing your application modules without specifying PYTHONPATH. In the background, py.test modifies your sys.path by including all submodules which are found from the root path.

Can I have more than one conftest.py file?

Yes you can and it is strongly recommended if your test structure is somewhat complex. conftest.py files have directory scope. Therefore, creating targeted fixtures and helpers is good practice.

When would I want to do that? Examples will be appreciated.

Several cases could fit:

Creating a set of tools or hooks for a particular group of tests.

root/mod/conftest.py

def pytest_runtest_setup(item):
    print("I am mod")
    #do some stuff


Running root/mod2/test.py will NOT produce "I am mod"

Loading a set of fixtures for some tests but not for others.

root/mod/conftest.py

@pytest.fixture()
def fixture():
    return "some stuff"

root/mod2/conftest.py

@pytest.fixture()
def fixture():
    return "some other stuff"

root/mod2/test.py

def test(fixture):
    print(fixture)

Will print "some other stuff".

Overriding hooks inherited from the root conftest.py.

root/mod/conftest.py

def pytest_runtest_setup(item):
    print("I am mod")
    #do some stuff

root/conftest.py

def pytest_runtest_setup(item):
    print("I am root")
    #do some stuff

By running any test inside root/mod, only "I am mod" is printed.

You can read more about conftest.py here.

EDIT:

What if I need plain-old helper functions to be called from a number of tests in different modules - will they be available to me if I put them in a conftest.py? Or should I simply put them in a helpers.py module and import and use it in my test modules?

You can use conftest.py to define your helpers. However, you should follow common practice. Helpers can be used as fixtures at least in pytest. For example in my tests I have a mock redis helper which I inject into my tests this way.

root/helper/redis/redis.py

@pytest.fixture
def mock_redis():
    return MockRedis()

root/tests/stuff/conftest.py

pytest_plugin="helper.redis.redis"

root/tests/stuff/test.py

def test(mock_redis):
    print(mock_redis.get("stuff"))

This will be a test module that you can freely import in your tests. NOTE that you could potentially name redis.py as conftest.py if your module redis contains more tests. However, that practice is discouraged because of ambiguity.

If you want to use conftest.py, you can simply put that helper in your root conftest.py and inject it when needed.

root/tests/conftest.py

@pytest.fixture
def mock_redis():
    return MockRedis()

root/tests/stuff/test.py

def test(mock_redis):
    print(mock_redis.get("stuff"))

Another thing you can do is to write an installable plugin. In that case your helper can be written anywhere but it needs to define an entry point to be installed in your and other potential test frameworks. See this.

If you don"t want to use fixtures, you could of course define a simple helper and just use the plain old import wherever it is needed.

root/tests/helper/redis.py

class MockRedis():
    # stuff
    pass

root/tests/stuff/test.py

from helper.redis import MockRedis

def test():
    print(MockRedis().get("stuff"))

However, here you might have problems with the path since the module is not in a child folder of the test. You should be able to overcome this (not tested) by adding an __init__.py to your helper

root/tests/helper/__init__.py

from .redis import MockRedis

Or simply adding the helper module to your PYTHONPATH.

Answer #9

I would suggest reading PEP 483 and PEP 484 and watching this presentation by Guido on type hinting.

In a nutshell: Type hinting is literally what the words mean. You hint the type of the object(s) you're using.

Due to the dynamic nature of Python, inferring or checking the type of an object being used is especially hard. This fact makes it hard for developers to understand what exactly is going on in code they haven't written and, most importantly, for type checking tools found in many IDEs (PyCharm and PyDev come to mind) that are limited due to the fact that they don't have any indicator of what type the objects are. As a result they resort to trying to infer the type with (as mentioned in the presentation) around 50% success rate.


To take two important slides from the type hinting presentation:

Why type hints?

  1. Helps type checkers: By hinting at what type you want the object to be the type checker can easily detect if, for instance, you're passing an object with a type that isn't expected.
  2. Helps with documentation: A third person viewing your code will know what is expected where, ergo, how to use it without getting TypeErrors.
  3. Helps IDEs develop more accurate and robust tools: Development Environments will be better suited at suggesting appropriate methods when they know what type your object is. You have probably experienced this with some IDE at some point, hitting the . and having methods/attributes pop up which aren't defined for an object.

Why use static type checkers?

  • Find bugs sooner: This is self-evident, I believe.
  • The larger your project the more you need it: Again, makes sense. Static languages offer a robustness and control that dynamic languages lack. The bigger and more complex your application becomes the more control and predictability (from a behavioral aspect) you require.
  • Large teams are already running static analysis: I'm guessing this verifies the first two points.

As a closing note for this small introduction: This is an optional feature and, from what I understand, it has been introduced in order to reap some of the benefits of static typing.

You generally do not need to worry about it and definitely don't need to use it (especially in cases where you use Python as an auxiliary scripting language). It should be helpful when developing large projects as it offers much needed robustness, control and additional debugging capabilities.


Type hinting with mypy:

In order to make this answer more complete, I think a little demonstration would be suitable. I'll be using mypy, the library which inspired Type Hints as they are presented in the PEP. This is mainly written for anybody bumping into this question and wondering where to begin.

Before I do that let me reiterate the following: PEP 484 doesn't enforce anything; it is simply setting a direction for function annotations and proposing guidelines for how type checking can/should be performed. You can annotate your functions and hint as many things as you want; your scripts will still run regardless of the presence of annotations because Python itself doesn't use them.

Anyways, as noted in the PEP, hinting types should generally take three forms:

  • Function annotations (PEP 3107).
  • Stub files for built-in/user modules.
  • Special # type: type comments that complement the first two forms. (See: What are variable annotations? for a Python 3.6 update for # type: type comments)

Additionally, you"ll want to use type hints in conjunction with the new typing module introduced in Py3.5. In it, many (additional) ABCs (abstract base classes) are defined along with helper functions and decorators for use in static checking. Most ABCs in collections.abc are included, but in a generic form in order to allow subscription (by defining a __getitem__() method).

For anyone interested in a more in-depth explanation of these, the mypy documentation is written very nicely and has a lot of code samples demonstrating/describing the functionality of their checker; it is definitely worth a read.

Function annotations and special comments:

First, it"s interesting to observe some of the behavior we can get when using special comments. Special # type: type comments can be added during variable assignments to indicate the type of an object if one cannot be directly inferred. Simple assignments are generally easily inferred but others, like lists (with regard to their contents), cannot.

Note: If we want to use any derivative of containers and need to specify the contents for that container we must use the generic types from the typing module. These support indexing.

# Generic List, supports indexing.
from typing import List

# In this case, the type is easily inferred as type: int.
i = 0

# Even though the type can be inferred as of type list
# there is no way to know the contents of this list.
# By using type: List[str] we indicate we want to use a list of strings.
a = []  # type: List[str]

# Appending an int to our list
# is statically not correct.
a.append(i)

# Appending a string is fine.
a.append("i")

print(a)  # [0, "i"]

If we add these commands to a file and execute them with our interpreter, everything works just fine and print(a) just prints the contents of list a. The # type comments have been discarded, treated as plain comments which have no additional semantic meaning.

By running this with mypy, on the other hand, we get the following response:

(Python3) $ mypy typeHintsCode.py
typesInline.py:14: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "str"

Indicating that a list of str objects cannot contain an int, which, statically speaking, is sound. This can be fixed by either abiding to the type of a and only appending str objects or by changing the type of the contents of a to indicate that any value is acceptable (Intuitively performed with List[Any] after Any has been imported from typing).

Function annotations are added in the form param_name: type after each parameter in your function signature, and a return type is specified using the -> type notation before the ending function colon; all annotations are stored in the __annotations__ attribute of that function in a handy dictionary form. Using a trivial example (which doesn't require extra types from the typing module):

def annotated(x: int, y: str) -> bool:
    return x < y

The annotated.__annotations__ attribute now has the following values:

{"y": <class "str">, "return": <class "bool">, "x": <class "int">}

If we're complete newbies, or we are familiar with Python 2.7 concepts and are consequently unaware of the TypeError lurking in the comparison in annotated, we can perform another static check, catch the error and save ourselves some trouble:

(Python3) $ mypy typeHintsCode.py
typeFunction.py: note: In function "annotated":
typeFunction.py:2: error: Unsupported operand types for < ("int" and "str")

Among other things, calling the function with invalid arguments will also get caught:

annotated(20, 20)

# mypy complains:
typeHintsCode.py:4: error: Argument 2 to "annotated" has incompatible type "int"; expected "str"

These can be extended to basically any use case, and the errors caught extend further than basic calls and operations. The types you can check for are really flexible and I have merely given a small sneak peek of their potential. A look in the typing module, the PEPs or the mypy documentation will give you a more comprehensive idea of the capabilities offered.
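As one more hedged example of the kinds of errors that get caught (the find_user function here is made up), mypy also tracks Optional values and flags uses that skip the None check:

from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    return "alice" if user_id == 1 else None

name = find_user(2)
print(name.upper())  # mypy reports (roughly): Item "None" of "Optional[str]" has no attribute "upper"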

Stub files:

Stub files can be used in two different, non-mutually-exclusive cases:

  • You need to type-check a module for which you do not want to directly alter the function signatures.
  • You want to write modules and have type-checking but additionally want to separate annotations from content.

Stub files (with a .pyi extension) are an annotated interface of the module you are making or want to use. They contain the signatures of the functions you want to type-check, with the bodies of the functions discarded. To get a feel for this, given a set of three random functions in a module named randfunc.py:

def message(s):
    print(s)

def alterContents(myIterable):
    return [i for i in myIterable if i % 2 == 0]

def combine(messageFunc, itFunc):
    messageFunc("Printing the Iterable")
    a = alterContents(range(1, 20))
    return set(a)

We can create a stub file randfunc.pyi, in which we can place some restrictions if we wish to do so. The downside is that somebody viewing the source without the stub won't really get that annotation assistance when trying to understand what is supposed to be passed where.

Anyway, the structure of a stub file is pretty simple: add all function definitions with empty (pass-filled) bodies and supply the annotations based on your requirements. Here, let's assume we only want to work with int types for our containers.

# Stub for randfunc.py
from typing import Any, Callable, Iterable, List, Set

def message(s: str) -> None: pass

def alterContents(myIterable: Iterable[int]) -> List[int]: pass

def combine(
    messageFunc: Callable[[str], Any],
    itFunc: Callable[[Iterable[int]], List[int]]
) -> Set[int]: pass

The combine function gives an indication of why you might want to use annotations in a different file: they sometimes clutter up the code and reduce readability (a big no-no for Python). You could of course use type aliases, but that sometimes confuses more than it helps (so use them wisely), as sketched below.
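For completeness, a hedged sketch of what a type-aliased version of the combine stub might look like (the alias names here are made up):

from typing import Any, Callable, Iterable, List, Set

MessageFunc = Callable[[str], Any]
IterFunc = Callable[[Iterable[int]], List[int]]

def combine(messageFunc: MessageFunc, itFunc: IterFunc) -> Set[int]: ...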


This should get you familiarized with the basic concepts of type hints in Python. Even though the type checker used here has been mypy, you should gradually start to see more of them pop up, some built into IDEs (PyCharm) and others as standard Python modules.

I'll try to add additional checkers/related packages to the following list when and if I find them (or if suggested).

Checkers I know of:

  • Mypy: as described here.
  • PyType: By Google, uses different notation from what I gather, probably worth a look.

Related Packages/Projects:

  • typeshed: Official Python repository housing an assortment of stub files for the standard library.

The typeshed project is actually one of the best places you can look to see how type hinting might be used in a project of your own. Let's take as an example the __init__ dunders of the Counter class in the corresponding .pyi file:

class Counter(Dict[_T, int], Generic[_T]):
    @overload
    def __init__(self) -> None: ...
    @overload
    def __init__(self, Mapping: Mapping[_T, int]) -> None: ...
    @overload
    def __init__(self, iterable: Iterable[_T]) -> None: ...

Where _T = TypeVar("_T") is used to define generic classes. For the Counter class we can see that it can either take no arguments in its initializer, get a single Mapping from any type to an int or take an Iterable of any type.
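As a minimal, made-up sketch of how TypeVar-based generics behave under mypy (the first helper is hypothetical, not part of typeshed):

from typing import Sequence, TypeVar

T = TypeVar("T")

def first(items: Sequence[T]) -> T:
    # The element type that goes in is the element type mypy infers coming out
    return items[0]

n = first([1, 2, 3])    # mypy infers n as int
c = first("abc")        # mypy infers c as str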


Notice: One thing I forgot to mention was that the typing module has been introduced on a provisional basis. From PEP 411:

A provisional package may have its API modified prior to "graduating" into a "stable" state. On one hand, this state provides the package with the benefits of being formally part of the Python distribution. On the other hand, the core development team explicitly states that no promises are made with regard to the stability of the package's API, which may change for the next release. While it is considered an unlikely outcome, such packages may even be removed from the standard library without a deprecation period if the concerns regarding their API or maintenance prove well-founded.

So take things here with a pinch of salt; I'm doubtful it will be removed or altered in significant ways, but one can never know.


** Another topic altogether, but valid in the scope of type-hints: PEP 526: Syntax for Variable Annotations is an effort to replace # type comments by introducing new syntax which allows users to annotate the type of variables in simple varname: type statements.

See What are variable annotations?, as previously mentioned, for a small introduction to these.
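As a rough sketch (variable names made up), the same intent as the earlier # type comments, expressed in the PEP 526 syntax available from Python 3.6 onward:

from typing import Dict, List

names: List[str] = []        # replaces: names = []  # type: List[str]
ages: Dict[str, int] = {}
retries: int = 3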

Answer #10

Whatever is assigned to the files variable is incorrect. Use the following code.

import glob
import os

list_of_files = glob.glob("/path/to/folder/*") # * means all if need specific format then *.csv
latest_file = max(list_of_files, key=os.path.getctime)
print(latest_file)

Rename multiple files with Python: StackOverflow Questions

Rename a dictionary key

Is there a way to rename a dictionary key, without reassigning its value to a new name and removing the old name key; and without iterating through dict key/value?

In the case of OrderedDict, do the same while keeping that key's position.

How to rename a file using Python

I want to change a.txt to b.kml.

Rename multiple files in a directory in Python

I'm trying to rename some files in a directory using Python.

Say I have a file called CHEESE_CHEESE_TYPE.*** and want to remove CHEESE_ so my resulting filename would be CHEESE_TYPE

I'm trying to use os.path.split but it's not working properly. I have also considered using string manipulations, but have not been successful with that either.

How can I rename a conda environment?

I have a conda environment named old_name, how can I change its name to new_name without breaking references?

Rename specific column(s) in pandas

I've got a dataframe called data. How would I rename only one column header? For example gdp to log(gdp)?

data =
    y  gdp  cap
0   1    2    5
1   2    3    9
2   8    7    2
3   3    4    7
4   6    7    7
5   4    8    3
6   8    2    8
7   9    9   10
8   6    6    4
9  10   10    7

Django - How to rename a model field using South?

I would like to change a name of specific fields in a model:

class Foo(models.Model):
    name = models.CharField()
    rel  = models.ForeignKey(Bar)

should change to:

class Foo(models.Model):
    full_name     = models.CharField()
    odd_relation  = models.ForeignKey(Bar)

What's the easiest way to do this using South?

Rename Pandas DataFrame Index

I've a csv file without a header, with a DateTime index. I want to rename the index and column name, but with df.rename() only the column name is renamed. Bug? I'm on version 0.12.0.

In [2]: df = pd.read_csv(r"D:\Data\DataTimeSeries_csv//seriesSM.csv", header=None, parse_dates=[[0]], index_col=[0] )

In [3]: df.head()
Out[3]: 
                   1
0                   
2002-06-18  0.112000
2002-06-22  0.190333
2002-06-26  0.134000
2002-06-30  0.093000
2002-07-04  0.098667

In [4]: df.rename(index={0:"Date"}, columns={1:"SM"}, inplace=True)

In [5]: df.head()
Out[5]: 
                  SM
0                   
2002-06-18  0.112000
2002-06-22  0.190333
2002-06-26  0.134000
2002-06-30  0.093000
2002-07-04  0.098667

Easiest way to rename a model using Django/South?

I've been hunting for an answer to this on South's site, Google, and SO, but couldn't find a simple way to do this.

I want to rename a Django model using South. Say you have the following:

class Foo(models.Model):
    name = models.CharField()

class FooTwo(models.Model):
    name = models.CharField()
    foo = models.ForeignKey(Foo)

and you want to convert Foo to Bar, namely

class Bar(models.Model):
    name = models.CharField()

class FooTwo(models.Model):
    name = models.CharField()
    foo = models.ForeignKey(Bar)

To keep it simple, I'm just trying to change the name from Foo to Bar, but ignore the foo member in FooTwo for now.

What's the easiest way to do this using South?

  1. I could probably do a data migration, but that seems pretty involved.
  2. Write a custom migration, e.g. db.rename_table("city_citystate", "geo_citystate"), but I"m not sure how to fix the foreign key in this case.
  3. An easier way that you know?

Rename an environment with virtualenvwrapper

I have an environment called doors and I would like to rename it to django for the virtualenvwrapper.

I've noticed that if I just rename the folder ~/.virtualenvs/doors to django, I can now call workon django, but the shell prompt still shows (doors).

Answer #1

Quick Answer:

The simplest way to get row counts per group is by calling .size(), which returns a Series:

df.groupby(["col1","col2"]).size()


Usually you want this result as a DataFrame (instead of a Series) so you can do:

df.groupby(["col1", "col2"]).size().reset_index(name="counts")


If you want to find out how to calculate the row counts and other statistics for each group continue reading below.


Detailed example:

Consider the following example dataframe:

In [2]: df
Out[2]: 
  col1 col2  col3  col4  col5  col6
0    A    B  0.20 -0.61 -0.49  1.49
1    A    B -1.53 -1.01 -0.39  1.82
2    A    B -0.44  0.27  0.72  0.11
3    A    B  0.28 -1.32  0.38  0.18
4    C    D  0.12  0.59  0.81  0.66
5    C    D -0.13 -1.65 -1.64  0.50
6    C    D -1.42 -0.11 -0.18 -0.44
7    E    F -0.00  1.42 -0.26  1.17
8    E    F  0.91 -0.47  1.35 -0.34
9    G    H  1.48 -0.63 -1.14  0.17

First let's use .size() to get the row counts:

In [3]: df.groupby(["col1", "col2"]).size()
Out[3]: 
col1  col2
A     B       4
C     D       3
E     F       2
G     H       1
dtype: int64

Then let's use .size().reset_index(name="counts") to get the row counts:

In [4]: df.groupby(["col1", "col2"]).size().reset_index(name="counts")
Out[4]: 
  col1 col2  counts
0    A    B       4
1    C    D       3
2    E    F       2
3    G    H       1


Including results for more statistics

When you want to calculate statistics on grouped data, it usually looks like this:

In [5]: (df
   ...: .groupby(["col1", "col2"])
   ...: .agg({
   ...:     "col3": ["mean", "count"], 
   ...:     "col4": ["median", "min", "count"]
   ...: }))
Out[5]: 
            col4                  col3      
          median   min count      mean count
col1 col2                                   
A    B    -0.810 -1.32     4 -0.372500     4
C    D    -0.110 -1.65     3 -0.476667     3
E    F     0.475 -0.47     2  0.455000     2
G    H    -0.630 -0.63     1  1.480000     1

The result above is a little annoying to deal with because of the nested column labels, and also because row counts are on a per column basis.

To gain more control over the output I usually split the statistics into individual aggregations that I then combine using join. It looks like this:

In [6]: gb = df.groupby(["col1", "col2"])
   ...: counts = gb.size().to_frame(name="counts")
   ...: (counts
   ...:  .join(gb.agg({"col3": "mean"}).rename(columns={"col3": "col3_mean"}))
   ...:  .join(gb.agg({"col4": "median"}).rename(columns={"col4": "col4_median"}))
   ...:  .join(gb.agg({"col4": "min"}).rename(columns={"col4": "col4_min"}))
   ...:  .reset_index()
   ...: )
   ...: 
Out[6]: 
  col1 col2  counts  col3_mean  col4_median  col4_min
0    A    B       4  -0.372500       -0.810     -1.32
1    C    D       3  -0.476667       -0.110     -1.65
2    E    F       2   0.455000        0.475     -0.47
3    G    H       1   1.480000       -0.630     -0.63



Footnotes

The code used to generate the test data is shown below:

In [1]: import numpy as np
   ...: import pandas as pd 
   ...: 
   ...: keys = np.array([
   ...:         ["A", "B"],
   ...:         ["A", "B"],
   ...:         ["A", "B"],
   ...:         ["A", "B"],
   ...:         ["C", "D"],
   ...:         ["C", "D"],
   ...:         ["C", "D"],
   ...:         ["E", "F"],
   ...:         ["E", "F"],
   ...:         ["G", "H"] 
   ...:         ])
   ...: 
   ...: df = pd.DataFrame(
   ...:     np.hstack([keys,np.random.randn(10,4).round(2)]), 
   ...:     columns = ["col1", "col2", "col3", "col4", "col5", "col6"]
   ...: )
   ...: 
   ...: df[["col3", "col4", "col5", "col6"]] = 
   ...:     df[["col3", "col4", "col5", "col6"]].astype(float)
   ...: 


Disclaimer:

If some of the columns that you are aggregating have null values, then you really want to be looking at the group row counts as an independent aggregation for each column. Otherwise you may be misled as to how many records are actually being used to calculate things like the mean because pandas will drop NaN entries in the mean calculation without telling you about it.
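As a small, hypothetical sketch of that caveat (frame and values made up): group sizes via .size() count every row, while per-column .count() only counts non-null entries:

import numpy as np
import pandas as pd

df_nan = pd.DataFrame({
    "col1": ["A", "A", "C"],
    "col2": ["B", "B", "D"],
    "col3": [0.2, np.nan, 0.1],
})

gb = df_nan.groupby(["col1", "col2"])
print(gb.size())           # 2 rows for (A, B): NaNs still count as rows
print(gb["col3"].count())  # 1 for (A, B): only non-null values are counted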

Answer #2

This post aims to give readers a primer on SQL-flavored merging with Pandas, how to use it, and when not to use it.

In particular, here's what this post will go through:

  • The basics - types of joins (LEFT, RIGHT, OUTER, INNER)

    • merging with different column names
    • merging with multiple columns
    • avoiding duplicate merge key column in output

What this post (and other posts by me on this thread) will not go through:

  • Performance-related discussions and timings (for now). Mostly notable mentions of better alternatives, wherever appropriate.
  • Handling suffixes, removing extra columns, renaming outputs, and other specific use cases. There are other (read: better) posts that deal with that, so figure it out!

Note Most examples default to INNER JOIN operations while demonstrating various features, unless otherwise specified.

Furthermore, all the DataFrames here can be copied and replicated so you can play with them. Also, see this post on how to read DataFrames from your clipboard.

Lastly, all visual representations of JOIN operations have been hand-drawn using Google Drawings. Inspiration from here.



Enough talk - just show me how to use merge!

Setup & Basics

np.random.seed(0)
left = pd.DataFrame({"key": ["A", "B", "C", "D"], "value": np.random.randn(4)})
right = pd.DataFrame({"key": ["B", "D", "E", "F"], "value": np.random.randn(4)})

left

  key     value
0   A  1.764052
1   B  0.400157
2   C  0.978738
3   D  2.240893

right

  key     value
0   B  1.867558
1   D -0.977278
2   E  0.950088
3   F -0.151357

For the sake of simplicity, the key column has the same name (for now).

An INNER JOIN is represented by

Note This, along with the forthcoming figures all follow this convention:

  • blue indicates rows that are present in the merge result
  • red indicates rows that are excluded from the result (i.e., removed)
  • green indicates missing values that are replaced with NaNs in the result

To perform an INNER JOIN, call merge on the left DataFrame, specifying the right DataFrame and the join key (at the very least) as arguments.

left.merge(right, on="key")
# Or, if you want to be explicit
# left.merge(right, on="key", how="inner")

  key   value_x   value_y
0   B  0.400157  1.867558
1   D  2.240893 -0.977278

This returns only rows from left and right which share a common key (in this example, "B" and "D").

A LEFT OUTER JOIN, or LEFT JOIN is represented by

This can be performed by specifying how="left".

left.merge(right, on="key", how="left")

  key   value_x   value_y
0   A  1.764052       NaN
1   B  0.400157  1.867558
2   C  0.978738       NaN
3   D  2.240893 -0.977278

Carefully note the placement of NaNs here. If you specify how="left", then only keys from left are used, and missing data from right is replaced by NaN.

And similarly, for a RIGHT OUTER JOIN, or RIGHT JOIN which is...

...specify how="right":

left.merge(right, on="key", how="right")

  key   value_x   value_y
0   B  0.400157  1.867558
1   D  2.240893 -0.977278
2   E       NaN  0.950088
3   F       NaN -0.151357

Here, keys from right are used, and missing data from left is replaced by NaN.

Finally, for the FULL OUTER JOIN, given by

specify how="outer".

left.merge(right, on="key", how="outer")

  key   value_x   value_y
0   A  1.764052       NaN
1   B  0.400157  1.867558
2   C  0.978738       NaN
3   D  2.240893 -0.977278
4   E       NaN  0.950088
5   F       NaN -0.151357

This uses the keys from both frames, and NaNs are inserted for missing rows in both.

The documentation summarizes these various merges nicely:



Other JOINs - LEFT-Excluding, RIGHT-Excluding, and FULL-Excluding/ANTI JOINs

If you need LEFT-Excluding and RIGHT-Excluding JOINs, you can do them in two steps.

For LEFT-Excluding JOIN, represented as

Start by performing a LEFT OUTER JOIN and then filter to the rows coming from left only (excluding everything that also matched on the right):

(left.merge(right, on="key", how="left", indicator=True)
     .query('_merge == "left_only"')
     .drop("_merge", 1))

  key   value_x  value_y
0   A  1.764052      NaN
2   C  0.978738      NaN

Where,

left.merge(right, on="key", how="left", indicator=True)

  key   value_x   value_y     _merge
0   A  1.764052       NaN  left_only
1   B  0.400157  1.867558       both
2   C  0.978738       NaN  left_only
3   D  2.240893 -0.977278       both

And similarly, for a RIGHT-Excluding JOIN,

(left.merge(right, on="key", how="right", indicator=True)
     .query('_merge == "right_only"')
     .drop("_merge", 1))

  key  value_x   value_y
2   E      NaN  0.950088
3   F      NaN -0.151357

Lastly, if you are required to do a merge that only retains keys from the left or right, but not both (IOW, performing an ANTI-JOIN),

You can do this in similar fashion—

(left.merge(right, on="key", how="outer", indicator=True)
     .query('_merge != "both"')
     .drop("_merge", 1))

  key   value_x   value_y
0   A  1.764052       NaN
2   C  0.978738       NaN
4   E       NaN  0.950088
5   F       NaN -0.151357

Different names for key columns

If the key columns are named differently—for example, left has keyLeft, and right has keyRight instead of key—then you will have to specify left_on and right_on as arguments instead of on:

left2 = left.rename({"key":"keyLeft"}, axis=1)
right2 = right.rename({"key":"keyRight"}, axis=1)

left2

  keyLeft     value
0       A  1.764052
1       B  0.400157
2       C  0.978738
3       D  2.240893

right2

  keyRight     value
0        B  1.867558
1        D -0.977278
2        E  0.950088
3        F -0.151357
left2.merge(right2, left_on="keyLeft", right_on="keyRight", how="inner")

  keyLeft   value_x keyRight   value_y
0       B  0.400157        B  1.867558
1       D  2.240893        D -0.977278

Avoiding duplicate key column in output

When merging on keyLeft from left and keyRight from right, if you only want either of the keyLeft or keyRight (but not both) in the output, you can start by setting the index as a preliminary step.

left3 = left2.set_index("keyLeft")
left3.merge(right2, left_index=True, right_on="keyRight")

    value_x keyRight   value_y
0  0.400157        B  1.867558
1  2.240893        D -0.977278

Contrast this with the output of the command just before (that is, the output of left2.merge(right2, left_on="keyLeft", right_on="keyRight", how="inner")) and you'll notice keyLeft is missing. You can figure out what column to keep based on which frame's index is set as the key. This may matter when, say, performing some OUTER JOIN operation.


Merging only a single column from one of the DataFrames

For example, consider

right3 = right.assign(newcol=np.arange(len(right)))
right3
  key     value  newcol
0   B  1.867558       0
1   D -0.977278       1
2   E  0.950088       2
3   F -0.151357       3

If you are required to merge only "newcol" (without any of the other columns), you can usually just subset columns before merging:

left.merge(right3[["key", "newcol"]], on="key")

  key     value  newcol
0   B  0.400157       0
1   D  2.240893       1

If you're doing a LEFT OUTER JOIN, a more performant solution would involve map:

# left["newcol"] = left["key"].map(right3.set_index("key")["newcol"])
left.assign(newcol=left["key"].map(right3.set_index("key")["newcol"]))

  key     value  newcol
0   A  1.764052     NaN
1   B  0.400157     0.0
2   C  0.978738     NaN
3   D  2.240893     1.0

As mentioned, this is similar to, but faster than

left.merge(right3[["key", "newcol"]], on="key", how="left")

  key     value  newcol
0   A  1.764052     NaN
1   B  0.400157     0.0
2   C  0.978738     NaN
3   D  2.240893     1.0

Merging on multiple columns

To join on more than one column, specify a list for on (or left_on and right_on, as appropriate).

left.merge(right, on=["key1", "key2"], ...)

Or, in the event the names are different,

left.merge(right, left_on=["lkey1", "lkey2"], right_on=["rkey1", "rkey2"])
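For a concrete (made-up) illustration of merging on two keys:

import pandas as pd

left_mk = pd.DataFrame({"key1": ["a", "a"], "key2": [1, 2], "lval": [10, 20]})
right_mk = pd.DataFrame({"key1": ["a", "b"], "key2": [1, 1], "rval": [100, 200]})

# Only the (a, 1) pair exists in both frames, so one row survives the INNER JOIN
left_mk.merge(right_mk, on=["key1", "key2"])
#   key1  key2  lval  rval
# 0    a     1    10   100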

Other useful merge* operations and functions

This section only covers the very basics, and is designed to only whet your appetite. For more examples and cases, see the documentation on merge, join, and concat as well as the links to the function specifications.



Continue Reading

Jump to other topics in Pandas Merging 101 to continue learning:


Answer #3

TL;DR version:

For the simple case of:

  • I have a text column with a delimiter and I want two columns

The simplest solution is:

df[["A", "B"]] = df["AB"].str.split(" ", 1, expand=True)

You must use expand=True if your strings have a non-uniform number of splits and you want None to replace the missing values.

Notice how, in either case, the .tolist() method is not necessary. Neither is zip().

In detail:

Andy Hayden's solution is most excellent in demonstrating the power of the str.extract() method.

But for a simple split over a known separator (like, splitting by dashes, or splitting by whitespace), the .str.split() method is enough1. It operates on a column (Series) of strings, and returns a column (Series) of lists:

>>> import pandas as pd
>>> df = pd.DataFrame({"AB": ["A1-B1", "A2-B2"]})
>>> df

      AB
0  A1-B1
1  A2-B2
>>> df["AB_split"] = df["AB"].str.split("-")
>>> df

      AB  AB_split
0  A1-B1  [A1, B1]
1  A2-B2  [A2, B2]

1: If you're unsure what the first two parameters of .str.split() do, I recommend the docs for the plain Python version of the method.

But how do you go from:

  • a column containing two-element lists

to:

  • two columns, each containing the respective element of the lists?

Well, we need to take a closer look at the .str attribute of a column.

It's a magical object used to collect methods that treat each element in a column as a string, and then apply the respective method to each element as efficiently as possible:

>>> upper_lower_df = pd.DataFrame({"U": ["A", "B", "C"]})
>>> upper_lower_df

   U
0  A
1  B
2  C
>>> upper_lower_df["L"] = upper_lower_df["U"].str.lower()
>>> upper_lower_df

   U  L
0  A  a
1  B  b
2  C  c

But it also has an "indexing" interface for getting each element of a string by its index:

>>> df["AB"].str[0]

0    A
1    A
Name: AB, dtype: object

>>> df["AB"].str[1]

0    1
1    2
Name: AB, dtype: object

Of course, this indexing interface of .str doesn't really care if each element it's indexing is actually a string, as long as it can be indexed, so:

>>> df["AB"].str.split("-", 1).str[0]

0    A1
1    A2
Name: AB, dtype: object

>>> df["AB"].str.split("-", 1).str[1]

0    B1
1    B2
Name: AB, dtype: object

Then, it"s a simple matter of taking advantage of the Python tuple unpacking of iterables to do

>>> df["A"], df["B"] = df["AB"].str.split("-", 1).str
>>> df

      AB  AB_split   A   B
0  A1-B1  [A1, B1]  A1  B1
1  A2-B2  [A2, B2]  A2  B2

Of course, getting a DataFrame out of splitting a column of strings is so useful that the .str.split() method can do it for you with the expand=True parameter:

>>> df["AB"].str.split("-", 1, expand=True)

    0   1
0  A1  B1
1  A2  B2

So, another way of accomplishing what we wanted is to do:

>>> df = df[["AB"]]
>>> df

      AB
0  A1-B1
1  A2-B2

>>> df.join(df["AB"].str.split("-", 1, expand=True).rename(columns={0:"A", 1:"B"}))

      AB   A   B
0  A1-B1  A1  B1
1  A2-B2  A2  B2

The expand=True version, although longer, has a distinct advantage over the tuple unpacking method. Tuple unpacking doesn't deal well with splits of different lengths:

>>> df = pd.DataFrame({"AB": ["A1-B1", "A2-B2", "A3-B3-C3"]})
>>> df
         AB
0     A1-B1
1     A2-B2
2  A3-B3-C3
>>> df["A"], df["B"], df["C"] = df["AB"].str.split("-")
Traceback (most recent call last):
  [...]    
ValueError: Length of values does not match length of index
>>> 

But expand=True handles it nicely by placing None in the columns for which there aren"t enough "splits":

>>> df.join(
...     df["AB"].str.split("-", expand=True).rename(
...         columns={0:"A", 1:"B", 2:"C"}
...     )
... )
         AB   A   B     C
0     A1-B1  A1  B1  None
1     A2-B2  A2  B2  None
2  A3-B3-C3  A3  B3    C3

Answer #4

root is the old (pre-conda 4.4) name for the main environment; after conda 4.4, it was renamed to base (source).

What 95% of people actually want

In most cases what you want to do when you say that you want to update Anaconda is to execute the command:

conda update --all

(But this should be preceded by conda update -n base conda so you have the latest conda version installed)

This will update all packages in the current environment to the latest version -- with the small print being that it may use an older version of some packages in order to satisfy dependency constraints (often this won"t be necessary and when it is necessary the package plan solver will do its best to minimize the impact).

This needs to be executed from the command line, and the best way to get there is from Anaconda Navigator, then the "Environments" tab, then click on the triangle beside the base environment, selecting "Open Terminal":

Open terminal from Navigator

This operation will only update the one selected environment (in this case, the base environment). If you have other environments you'd like to update, you can repeat the process above, but first click on the environment. When it is selected, there is a triangular marker on the right (see image above, step 3). Or from the command line you can provide the environment name (-n envname) or path (-p /path/to/env), for example to update your dspyr environment from the screenshot above:

conda update -n dspyr --all

Update individual packages

If you are only interested in updating an individual package then simply click on the blue arrow or blue version number in Navigator, e.g. for astroid or astropy in the screenshot above, and this will tag those packages for an upgrade. When you are done you need to click the "Apply" button:

Apply to update individual packages

Or from the command line:

conda update astroid astropy

Updating just the packages in the standard Anaconda Distribution

If you don't care about package versions and just want "the latest set of all packages in the standard Anaconda Distribution, so long as they work together", then you should take a look at this gist.

Why updating the Anaconda package is almost always a bad idea

In most cases updating the Anaconda package in the package list will have a surprising result: you may actually downgrade many packages (in fact, this is likely if it indicates the version as custom). The gist above provides details.

Leverage conda environments

Your base environment is probably not a good place to try to manage an exact set of packages: it is going to be a dynamic working space with new packages installed and packages randomly updated. If you need an exact set of packages, create a conda environment to hold them. Thanks to the conda package cache and the way file linking is used, doing this is typically i) fast and ii) consumes very little additional disk space. E.g.

conda create -n myspecialenv -c bioconda -c conda-forge python=3.5 pandas beautifulsoup seaborn nltk

The conda documentation has more details and examples.

pip, PyPI, and setuptools?

None of this is going to help with updating packages that have been installed from PyPI via pip or any packages installed using python setup.py install. conda list will give you some hints about the pip-based Python packages you have in an environment, but it won't do anything special to update them.

Commercial use of Anaconda or Anaconda Enterprise

It is pretty much exactly the same story, with the exception that you may not be able to update the base environment if it was installed by someone else (say, to /opt/anaconda/latest). If you're not able to update the environments you are using, you should be able to clone and then update:

conda create -n myenv --clone base
conda update -n myenv --all

Answer #5

First consider if you really need to iterate over rows in a DataFrame. See this answer for alternatives.

If you still need to iterate over rows, you can use the methods below. Note some important caveats which are not mentioned in any of the other answers.

itertuples() is supposed to be faster than iterrows()

But be aware, according to the docs (pandas 0.24.2 at the moment):

  • iterrows: dtype might not match from row to row

    Because iterrows returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames). To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the values and which is generally much faster than iterrows()

  • iterrows: Do not modify rows

    You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect.

    Use DataFrame.apply() instead:

    new_df = df.apply(lambda x: x * 2)
    
  • itertuples:

    The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start with an underscore. With a large number of columns (>255), regular tuples are returned.

See pandas docs on iteration for more details.
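As a hedged illustration (with a made-up frame), itertuples usage looks like this:

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [0.5, 1.5, 2.5]})

# namedtuples keep per-row dtypes intact and avoid the Series-per-row overhead
for row in df.itertuples(index=False):
    print(row.a + row.b)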

Answer #6

There are many ways to do that:

  • Option 1. Using selectExpr.

     data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)], 
                                       ["Name", "askdaosdka"])
     data.show()
     data.printSchema()
    
     # Output
     #+-------+----------+
     #|   Name|askdaosdka|
     #+-------+----------+
     #|Alberto|         2|
     #| Dakota|         2|
     #+-------+----------+
    
     #root
     # |-- Name: string (nullable = true)
     # |-- askdaosdka: long (nullable = true)
    
     df = data.selectExpr("Name as name", "askdaosdka as age")
     df.show()
     df.printSchema()
    
     # Output
     #+-------+---+
     #|   name|age|
     #+-------+---+
     #|Alberto|  2|
     #| Dakota|  2|
     #+-------+---+
    
     #root
     # |-- name: string (nullable = true)
     # |-- age: long (nullable = true)
    
  • Option 2. Using withColumnRenamed, notice that this method allows you to "overwrite" the same column. For Python3, replace xrange with range.

     from functools import reduce
    
     oldColumns = data.schema.names
     newColumns = ["name", "age"]
    
     df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]), xrange(len(oldColumns)), data)
     df.printSchema()
     df.show()
    
  • Option 3. using alias, in Scala you can also use as.

     from pyspark.sql.functions import col
    
     data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
     data.show()
    
     # Output
     #+-------+---+
     #|   name|age|
     #+-------+---+
     #|Alberto|  2|
     #| Dakota|  2|
     #+-------+---+
    
  • Option 4. Using sqlContext.sql, which lets you use SQL queries on DataFrames registered as tables.

     sqlContext.registerDataFrameAsTable(data, "myTable")
     df2 = sqlContext.sql("SELECT Name AS name, askdaosdka as age from myTable")
    
     df2.show()
    
     # Output
     #+-------+---+
     #|   name|age|
     #+-------+---+
     #|Alberto|  2|
     #| Dakota|  2|
     #+-------+---+
    

Answer #7

Explain __all__ in Python?

I keep seeing the variable __all__ set in different __init__.py files.

What does this do?

What does __all__ do?

It declares the semantically "public" names from a module. If there is a name in __all__, users are expected to use it, and they can have the expectation that it will not change.

It also will have programmatic effects:

import *

__all__ in a module, e.g. module.py:

__all__ = ["foo", "Bar"]

means that when you import * from the module, only those names in the __all__ are imported:

from module import *               # imports foo and Bar

Documentation tools

Documentation and code autocompletion tools may (in fact, should) also inspect the __all__ to determine what names to show as available from a module.

__init__.py makes a directory a Python package

From the docs:

The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path.

In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable.

So the __init__.py can declare the __all__ for a package.

Managing an API:

A package is typically made up of modules that may import one another, but that are necessarily tied together with an __init__.py file. That file is what makes the directory an actual Python package. For example, say you have the following files in a package:

package
├── __init__.py
├── module_1.py
└── module_2.py

Let's create these files with Python so you can follow along - you could paste the following into a Python 3 shell:

from pathlib import Path

package = Path("package")
package.mkdir()

(package / "__init__.py").write_text("""
from .module_1 import *
from .module_2 import *
""")

package_module_1 = package / "module_1.py"
package_module_1.write_text("""
__all__ = ["foo"]
imp_detail1 = imp_detail2 = imp_detail3 = None
def foo(): pass
""")

package_module_2 = package / "module_2.py"
package_module_2.write_text("""
__all__ = ["Bar"]
imp_detail1 = imp_detail2 = imp_detail3 = None
class Bar: pass
""")

And now you have presented a complete api that someone else can use when they import your package, like so:

import package
package.foo()
package.Bar()

And the package won't have all the other implementation details you used when creating your modules cluttering up the package namespace.

__all__ in __init__.py

After more work, maybe you've decided that the modules are too big (like many thousands of lines?) and need to be split up. So you do the following:

package
├── __init__.py
├── module_1
│   ├── foo_implementation.py
│   └── __init__.py
└── module_2
    ├── Bar_implementation.py
    └── __init__.py

First make the subpackage directories with the same names as the modules:

subpackage_1 = package / "module_1"
subpackage_1.mkdir()
subpackage_2 = package / "module_2"
subpackage_2.mkdir()

Move the implementations:

package_module_1.rename(subpackage_1 / "foo_implementation.py")
package_module_2.rename(subpackage_2 / "Bar_implementation.py")

create __init__.pys for the subpackages that declare the __all__ for each:

(subpackage_1 / "__init__.py").write_text("""
from .foo_implementation import *
__all__ = ["foo"]
""")
(subpackage_2 / "__init__.py").write_text("""
from .Bar_implementation import *
__all__ = ["Bar"]
""")

And now you still have the api provisioned at the package level:

>>> import package
>>> package.foo()
>>> package.Bar()
<package.module_2.Bar_implementation.Bar object at 0x7f0c2349d210>

And you can easily add things to your API that you can manage at the subpackage level instead of the subpackage's module level. If you want to add a new name to the API, you simply update the __init__.py, e.g. in module_2:

from .Bar_implementation import *
from .Baz_implementation import *
__all__ = ["Bar", "Baz"]

And if you're not ready to publish Baz in the top level API, in your top level __init__.py you could have:

from .module_1 import *       # also constrained by __all__'s
from .module_2 import *       # in the __init__.py's
__all__ = ["foo", "Bar"]     # further constraining the names advertised

and if your users are aware of the availability of Baz, they can use it:

import package
package.Baz()

but if they don't know about it, other tools (like pydoc) won't inform them.

You can later change that when Baz is ready for prime time:

from .module_1 import *
from .module_2 import *
__all__ = ["foo", "Bar", "Baz"]

Prefixing _ versus __all__:

By default, Python will export all names that do not start with an _. You certainly could rely on this mechanism. Some packages in the Python standard library, in fact, do rely on this, but to do so, they alias their imports, for example, in ctypes/__init__.py:

import os as _os, sys as _sys

Using the _ convention can be more elegant because it removes the redundancy of naming the names again. But it adds the redundancy for imports (if you have a lot of them) and it is easy to forget to do this consistently - and the last thing you want is to have to indefinitely support something you intended to only be an implementation detail, just because you forgot to prefix an _ when naming a function.

I personally write an __all__ early in my development lifecycle for modules so that others who might use my code know what they should use and not use.

Most packages in the standard library also use __all__.

When avoiding __all__ makes sense

It makes sense to stick to the _ prefix convention in lieu of __all__ when:

  • You're still in early development mode and have no users, and are constantly tweaking your API.
  • Maybe you do have users, but you have unittests that cover the API, and you're still actively adding to the API and tweaking in development.

An export decorator

The downside of using __all__ is that you have to write the names of functions and classes being exported twice - and the information is kept separate from the definitions. We could use a decorator to solve this problem.

I got the idea for such an export decorator from David Beazley's talk on packaging. This implementation seems to work well in CPython's traditional importer. If you have a special import hook or system, I do not guarantee it, but if you adopt it, it is fairly trivial to back out - you'll just need to manually add the names back into the __all__

So in, for example, a utility library, you would define the decorator:

import sys

def export(fn):
    mod = sys.modules[fn.__module__]
    if hasattr(mod, "__all__"):
        mod.__all__.append(fn.__name__)
    else:
        mod.__all__ = [fn.__name__]
    return fn

and then, where you would define an __all__, you do this:

$ cat > main.py
from lib import export
__all__ = [] # optional - we create a list if __all__ is not there.

@export
def foo(): pass

@export
def bar():
    "bar"

def main():
    print("main")

if __name__ == "__main__":
    main()

And this works fine whether run as main or imported by another function.

$ cat > run.py
import main
main.main()

$ python run.py
main

And API provisioning with import * will work too:

$ cat > run.py
from main import *
foo()
bar()
main() # expected to error here, not exported

$ python run.py
Traceback (most recent call last):
  File "run.py", line 4, in <module>
    main() # expected to error here, not exported
NameError: name "main" is not defined

Answer #8

I know object-dtype columns make the data hard to convert with pandas functions. When I receive data like this, the first thing that comes to mind is to "flatten" or unnest the columns.

I am using pandas and Python functions for this type of question. If you are worried about the speed of the above solutions, check user3483203's answer, since it uses numpy, and most of the time numpy is faster. I recommend Cython and numba if speed matters.


Method 0 [pandas >= 0.25]
Starting from pandas 0.25, if you only need to explode one column, you can use the pandas.DataFrame.explode function:

df.explode("B")

   A  B
0  1  1
0  1  2
1  2  1
1  2  2

Given a dataframe with an empty list or a NaN in the column: an empty list will not cause an issue, but a NaN will need to be filled with a list.

df = pd.DataFrame({"A": [1, 2, 3, 4],"B": [[1, 2], [1, 2], [], np.nan]})
df.B = df.B.fillna({i: [] for i in df.index})  # replace NaN with []
df.explode("B")

   A    B
0  1    1
0  1    2
1  2    1
1  2    2
2  3  NaN
3  4  NaN

Method 1
apply + pd.Series (easy to understand, but not recommended in terms of performance)

df.set_index("A").B.apply(pd.Series).stack().reset_index(level=0).rename(columns={0:"B"})
Out[463]: 
   A  B
0  1  1
1  1  2
0  2  1
1  2  2

Method 2
Using repeat with the DataFrame constructor, re-create your dataframe (good for performance, not good for multiple columns)

df=pd.DataFrame({"A":df.A.repeat(df.B.str.len()),"B":np.concatenate(df.B.values)})
df
Out[465]: 
   A  B
0  1  1
0  1  2
1  2  1
1  2  2

Method 2.1
For example, besides A we might have A.1, ..., A.n. If we still use the method above (Method 2), it is hard to re-create the columns one by one.

Solution: join or merge with the index after "unnesting" the single column

s=pd.DataFrame({"B":np.concatenate(df.B.values)},index=df.index.repeat(df.B.str.len()))
s.join(df.drop("B",1),how="left")
Out[477]: 
   B  A
0  1  1
0  2  1
1  1  2
1  2  2

If you need the column order exactly the same as before, add reindex at the end.

s.join(df.drop("B",1),how="left").reindex(columns=df.columns)

Method 3
recreate the list

pd.DataFrame([[x] + [z] for x, y in df.values for z in y],columns=df.columns)
Out[488]: 
   A  B
0  1  1
1  1  2
2  2  1
3  2  2

If more than two columns, use

s=pd.DataFrame([[x] + [z] for x, y in zip(df.index,df.B) for z in y])
s.merge(df,left_on=0,right_index=True)
Out[491]: 
   0  1  A       B
0  0  1  1  [1, 2]
1  0  2  1  [1, 2]
2  1  1  2  [1, 2]
3  1  2  2  [1, 2]

Method 4
using reindex or loc

df.reindex(df.index.repeat(df.B.str.len())).assign(B=np.concatenate(df.B.values))
Out[554]: 
   A  B
0  1  1
0  1  2
1  2  1
1  2  2

#df.loc[df.index.repeat(df.B.str.len())].assign(B=np.concatenate(df.B.values))

Method 5
when the list only contains unique values:

df=pd.DataFrame({"A":[1,2],"B":[[1,2],[3,4]]})
from collections import ChainMap
d = dict(ChainMap(*map(dict.fromkeys, df["B"], df["A"])))
pd.DataFrame(list(d.items()),columns=df.columns[::-1])
Out[574]: 
   B  A
0  1  1
1  2  1
2  3  2
3  4  2

Method 6
using numpy for high performance:

newvalues=np.dstack((np.repeat(df.A.values,list(map(len,df.B.values))),np.concatenate(df.B.values)))
pd.DataFrame(data=newvalues[0],columns=df.columns)
   A  B
0  1  1
1  1  2
2  2  1
3  2  2

Method 7
using the base functions itertools.cycle and itertools.chain: a pure Python solution, just for fun

from itertools import cycle,chain
l=df.values.tolist()
l1=[list(zip([x[0]], cycle(x[1])) if len([x[0]]) > len(x[1]) else list(zip(cycle([x[0]]), x[1]))) for x in l]
pd.DataFrame(list(chain.from_iterable(l1)),columns=df.columns)
   A  B
0  1  1
1  1  2
2  2  1
3  2  2

Generalizing to multiple columns

df=pd.DataFrame({"A":[1,2],"B":[[1,2],[3,4]],"C":[[1,2],[3,4]]})
df
Out[592]: 
   A       B       C
0  1  [1, 2]  [1, 2]
1  2  [3, 4]  [3, 4]

Self-defined function:

def unnesting(df, explode):
    idx = df.index.repeat(df[explode[0]].str.len())
    df1 = pd.concat([
        pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1)
    df1.index = idx

    return df1.join(df.drop(explode, 1), how="left")

        
unnesting(df,["B","C"])
Out[609]: 
   B  C  A
0  1  1  1
0  2  2  1
1  3  3  2
1  4  4  2

Column-wise Unnesting

All the methods above deal with vertical unnesting and exploding. If you need to expand the list horizontally, check with the pd.DataFrame constructor

df.join(pd.DataFrame(df.B.tolist(),index=df.index).add_prefix("B_"))
Out[33]: 
   A       B       C  B_0  B_1
0  1  [1, 2]  [1, 2]    1    2
1  2  [3, 4]  [3, 4]    3    4

Updated function

def unnesting(df, explode, axis):
    if axis==1:
        idx = df.index.repeat(df[explode[0]].str.len())
        df1 = pd.concat([
            pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1)
        df1.index = idx

        return df1.join(df.drop(explode, 1), how="left")
    else :
        df1 = pd.concat([
                         pd.DataFrame(df[x].tolist(), index=df.index).add_prefix(x) for x in explode], axis=1)
        return df1.join(df.drop(explode, 1), how="left")

Test Output

unnesting(df, ["B","C"], axis=0)
Out[36]: 
   B0  B1  C0  C1  A
0   1   2   1   2  1
1   3   4   3   4  2

Update 2021-02-17 with original explode function

def unnesting(df, explode, axis):
    if axis==1:
        df1 = pd.concat([df[x].explode() for x in explode], axis=1)
        return df1.join(df.drop(explode, 1), how="left")
    else :
        df1 = pd.concat([
                         pd.DataFrame(df[x].tolist(), index=df.index).add_prefix(x) for x in explode], axis=1)
        return df1.join(df.drop(explode, 1), how="left")

Answer #9

You cannot add an arbitrary column to a DataFrame in Spark. New columns can be created only by using literals (other literal types are described in How to add a constant column in a Spark DataFrame?)

from pyspark.sql.functions import lit

df = sqlContext.createDataFrame(
    [(1, "a", 23.0), (3, "B", -23.0)], ("x1", "x2", "x3"))

df_with_x4 = df.withColumn("x4", lit(0))
df_with_x4.show()

## +---+---+-----+---+
## | x1| x2|   x3| x4|
## +---+---+-----+---+
## |  1|  a| 23.0|  0|
## |  3|  B|-23.0|  0|
## +---+---+-----+---+

transforming an existing column:

from pyspark.sql.functions import exp

df_with_x5 = df_with_x4.withColumn("x5", exp("x3"))
df_with_x5.show()

## +---+---+-----+---+--------------------+
## | x1| x2|   x3| x4|                  x5|
## +---+---+-----+---+--------------------+
## |  1|  a| 23.0|  0| 9.744803446248903E9|
## |  3|  B|-23.0|  0|1.026187963170189...|
## +---+---+-----+---+--------------------+

included using a join:

from pyspark.sql.functions import col

lookup = sqlContext.createDataFrame([(1, "foo"), (2, "bar")], ("k", "v"))
df_with_x6 = (df_with_x5
    .join(lookup, col("x1") == col("k"), "leftouter")
    .drop("k")
    .withColumnRenamed("v", "x6"))

## +---+---+-----+---+--------------------+----+
## | x1| x2|   x3| x4|                  x5|  x6|
## +---+---+-----+---+--------------------+----+
## |  1|  a| 23.0|  0| 9.744803446248903E9| foo|
## |  3|  B|-23.0|  0|1.026187963170189...|null|
## +---+---+-----+---+--------------------+----+

or generated with function / udf:

from pyspark.sql.functions import rand

df_with_x7 = df_with_x6.withColumn("x7", rand())
df_with_x7.show()

## +---+---+-----+---+--------------------+----+-------------------+
## | x1| x2|   x3| x4|                  x5|  x6|                 x7|
## +---+---+-----+---+--------------------+----+-------------------+
## |  1|  a| 23.0|  0| 9.744803446248903E9| foo|0.41930610446846617|
## |  3|  B|-23.0|  0|1.026187963170189...|null|0.37801881545497873|
## +---+---+-----+---+--------------------+----+-------------------+

Performance-wise, built-in functions (pyspark.sql.functions), which map to Catalyst expression, are usually preferred over Python user defined functions.

If you want to add the content of an arbitrary RDD as a column, you can add an index to both (for example with zipWithIndex), convert the RDD to a DataFrame, and join the two on that index.

Answer #10

df = (df.withColumnRenamed("colName", "newColName")
        .withColumnRenamed("colName2", "newColName2"))

The advantage of this approach: with a long list of columns, you may want to change only a few column names, which is very convenient in these scenarios. It is also very useful when joining tables with duplicate column names. For several renames at once, a loop over a mapping (sketched below) keeps it tidy.
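A hedged sketch of that pattern, assuming df is the DataFrame from above and the old/new names are placeholders:

renames = {"colName": "newColName", "colName2": "newColName2"}

# Chain withColumnRenamed over the mapping; untouched columns keep their names
for old, new in renames.items():
    df = df.withColumnRenamed(old, new)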
