Python | hasattr() method

hasattr | Python Methods and Functions

hasattr() is a built-in utility function in Python that is used in many day-to-day software applications. 
Its main task is to check whether an object has the given named attribute, returning True if it is present and False otherwise.

Syntax: hasattr(obj, key)

obj: The object whose attribute has to be checked.
key: Attribute which needs to be checked.

Returns: True if the attribute is present, False otherwise.

Code #1: Demonstrating how hasattr() works

# Python code demonstrating
# how hasattr() works

# class declaration
class GfG:
    name = "Python.Engineering"
    age = 24

# object initialization
obj = GfG()

# using hasattr() to check for name
print("Does name exist? " + str(hasattr(obj, 'name')))

# using hasattr() to check for motto
print("Does motto exist? " + str(hasattr(obj, 'motto')))


Does name exist? True
Does motto exist? False

Code #2: Performance analysis

# Python code for
# hasattr() performance analysis

import time

# class declaration
class GfG:
    name = "Python.Engineering"
    age = 24

# object initialization
obj = GfG()

# using hasattr() to check for motto
start_hasattr = time.time()

if hasattr(obj, 'motto'):
    print("Motto is there")
else:
    print("No Motto")

print("Time to execute hasattr: " + str(time.time() - start_hasattr))

# using try/except to check for motto
start_try = time.time()

try:
    print(obj.motto)
    print("Motto is there")
except AttributeError:
    print("No Motto")

print("Time to execute try: " + str(time.time() - start_try))


No Motto
Time to execute hasattr: 5.245208740234375e-06
No Motto
Time to execute try: 2.6226043701171875e-06

Result: a plain try/except takes less time than hasattr(), but for code readability, hasattr() is often the better choice.

Applications: this function can be used to check for an attribute before accessing it, avoiding unnecessary AttributeErrors. Chained hasattr() checks are sometimes used to avoid touching one attribute when a related attribute is missing.
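For example, here is a minimal sketch of both uses (the class and attribute names are made up for illustration):

class Employee:
    name = "Sam"
    # note: no "salary" or "manager" attribute defined

emp = Employee()

# validate before access to avoid an AttributeError
if hasattr(emp, "salary"):
    print(emp.salary)
else:
    print("salary not set")

# "chaining": only touch a dependent attribute when the first one exists
if hasattr(emp, "manager") and hasattr(emp.manager, "name"):
    print(emp.manager.name)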

Python | hasattr() method: StackOverflow Questions

Answer #1

How do I determine the size of an object in Python?

The answer, "Just use sys.getsizeof", is not a complete answer.

That answer does work for builtin objects directly, but it does not account for what those objects may contain: specifically, what types such as custom objects, tuples, lists, dicts, and sets contain. They can contain instances of each other, as well as numbers, strings and other objects.

A More Complete Answer

Using 64-bit Python 3.6 from the Anaconda distribution, with sys.getsizeof, I have determined the minimum size of the following objects, and note that sets and dicts preallocate space so empty ones don't grow again until after a set amount (which may vary by implementation of the language):

Python 3:

Bytes  type        scaling notes
28     int         +4 bytes about every 30 powers of 2
37     bytes       +1 byte per additional byte
49     str         +1-4 per additional character (depending on max width)
48     tuple       +8 per additional item
64     list        +8 for each additional
224    set         5th increases to 736; 21st, 2272; 85th, 8416; 341st, 32992
240    dict        6th increases to 368; 22nd, 1184; 43rd, 2280; 86th, 4704; 171st, 9320
136    func def    does not include default args and other attrs
1056   class def   no slots 
56     class inst  has a __dict__ attr, same scaling as dict above
888    class def   with slots
16     __slots__   seems to store in mutable tuple-like structure
                   first slot grows to 48, and so on.

How do you interpret this? Well, say you have a set with 10 items in it. If each item is 100 bytes, how big is the whole data structure? The set is 736 bytes itself because it has sized up once to 736 bytes. Then you add the size of the items, so that's 1,736 bytes in total.
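To make the arithmetic concrete, here is a rough sketch (the exact byte counts are specific to CPython on a 64-bit build and vary by version):

import sys

s = set(range(10))
container_only = sys.getsizeof(s)                          # only the set structure itself
total = container_only + sum(sys.getsizeof(i) for i in s)  # add the contained items yourself
print(container_only, total)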

Some caveats for function and class definitions:

Note each class definition has a proxy __dict__ (48 bytes) structure for class attrs. Each slot has a descriptor (like a property) in the class definition.

Slotted instances start out with 48 bytes on their first element, and increase by 8 each additional. Only empty slotted objects have 16 bytes, and an instance with no data makes very little sense.

Also, each function definition has code objects, maybe docstrings, and other possible attributes, even a __dict__.

Also note that we use sys.getsizeof() because we care about the marginal space usage, which includes the garbage collection overhead for the object, from the docs:

getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.
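A small sketch of that distinction, assuming a GC-tracked object such as a list (exact numbers vary by CPython version and build):

import sys

lst = [1, 2, 3]
print(lst.__sizeof__())    # the size the object itself reports
print(sys.getsizeof(lst))  # typically larger, by the garbage collector header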

Also note that resizing lists (e.g. repetitively appending to them) causes them to preallocate space, similarly to sets and dicts. From the listobject.c source code:

    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
     * Note: new_allocated won't overflow because the largest possible value
     *       is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
     */
    new_allocated = (size_t)newsize + (newsize >> 3) + (newsize < 9 ? 3 : 6);

Historical data

Python 2.7 analysis, confirmed with guppy.hpy and sys.getsizeof:

Bytes  type        empty + scaling notes
24     int         NA
28     long        NA
37     str         + 1 byte per additional character
52     unicode     + 4 bytes per additional character
56     tuple       + 8 bytes per additional item
72     list        + 32 for first, 8 for each additional
232    set         sixth item increases to 744; 22nd, 2280; 86th, 8424
280    dict        sixth item increases to 1048; 22nd, 3352; 86th, 12568 *
120    func def    does not include default args and other attrs
64     class inst  has a __dict__ attr, same scaling as dict above
16     __slots__   class with slots has no dict, seems to store in 
                    mutable tuple-like structure.
904    class def   has a proxy __dict__ structure for class attrs
104    old class   makes sense, less stuff, has real dict though.

Note that dictionaries (but not sets) got a more compact representation in Python 3.6

I think 8 bytes per additional item to reference makes a lot of sense on a 64-bit machine. Those 8 bytes point to the place in memory the contained item is at. The 4 bytes are fixed width for unicode in Python 2, if I recall correctly, but in Python 3, str becomes a unicode of width equal to the max width of the characters.
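A quick way to see that per-reference scaling, assuming 64-bit CPython (the base size differs between versions):

import sys

print(sys.getsizeof(()))      # base size of an empty tuple
print(sys.getsizeof((1,)))    # base + 8 bytes for one reference
print(sys.getsizeof((1, 2)))  # base + 16 bytes for two references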

And for more on slots, see this answer.

A More Complete Function

We want a function that searches the elements in lists, tuples, sets, dicts, obj.__dict__'s, and obj.__slots__, as well as other things we may not have yet thought of.

We want to rely on gc.get_referents to do this search because it works at the C level (making it very fast). The downside is that get_referents can return redundant members, so we need to ensure we don't double count.

Classes, modules, and functions are singletons - they exist one time in memory. We're not so interested in their size, as there's not much we can do about them - they're a part of the program. So we'll avoid counting them if they happen to be referenced.

We"re going to use a blacklist of types so we don"t include the entire program in our size count.

import sys
from types import ModuleType, FunctionType
from gc import get_referents

# Custom objects know their class.
# Function objects seem to know way too much, including modules.
# Exclude modules as well.
BLACKLIST = type, ModuleType, FunctionType

def getsize(obj):
    """sum size of object & members."""
    if isinstance(obj, BLACKLIST):
        raise TypeError("getsize() does not take argument of type: "+ str(type(obj)))
    seen_ids = set()
    size = 0
    objects = [obj]
    while objects:
        need_referents = []
        for obj in objects:
            if not isinstance(obj, BLACKLIST) and id(obj) not in seen_ids:
                seen_ids.add(id(obj))
                size += sys.getsizeof(obj)
                need_referents.append(obj)
        objects = get_referents(*need_referents)
    return size

To contrast this with the following whitelisted function, most objects know how to traverse themselves for the purposes of garbage collection (which is approximately what we're looking for when we want to know how expensive in memory certain objects are. This functionality is used by gc.get_referents.) However, this measure is going to be much more expansive in scope than we intended if we are not careful.

For example, functions know quite a lot about the modules they are created in.

Another point of contrast is that strings that are keys in dictionaries are usually interned so they are not duplicated. Checking for id(key) will also allow us to avoid counting duplicates, which we do in the next section. The blacklist solution skips counting keys that are strings altogether.
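A sketch of that interning behaviour (it is a CPython implementation detail, so treat the result as typical rather than guaranteed; the key name is made up):

a = {"session_id": 1}
b = {"session_id": 2}
key_a = next(iter(a))
key_b = next(iter(b))
print(key_a is key_b)  # usually True: identifier-like string literals are interned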

Whitelisted Types, Recursive visitor

To cover most of these types myself, instead of relying on the gc module, I wrote this recursive function to try to estimate the size of most Python objects, including most builtins, types in the collections module, and custom types (slotted and otherwise).

This sort of function gives much more fine-grained control over the types we're going to count for memory usage, but has the danger of leaving important types out:

import sys
from numbers import Number
from collections import deque
from collections.abc import Set, Mapping

ZERO_DEPTH_BASES = (str, bytes, Number, range, bytearray)

def getsize(obj_0):
    """Recursively iterate to sum size of object & members."""
    _seen_ids = set()
    def inner(obj):
        obj_id = id(obj)
        if obj_id in _seen_ids:
            return 0
        _seen_ids.add(obj_id)
        size = sys.getsizeof(obj)
        if isinstance(obj, ZERO_DEPTH_BASES):
            pass # bypass remaining control flow and return
        elif isinstance(obj, (tuple, list, Set, deque)):
            size += sum(inner(i) for i in obj)
        elif isinstance(obj, Mapping) or hasattr(obj, "items"):
            size += sum(inner(k) + inner(v) for k, v in getattr(obj, "items")())
        # Check for custom object instances - may subclass above too
        if hasattr(obj, "__dict__"):
            size += inner(vars(obj))
        if hasattr(obj, "__slots__"): # can have __slots__ with __dict__
            size += sum(inner(getattr(obj, s)) for s in obj.__slots__ if hasattr(obj, s))
        return size
    return inner(obj_0)

And I tested it rather casually (I should unittest it):

>>> getsize(["a", tuple("bcd"), Foo()])
>>> getsize(Foo())
>>> getsize(tuple("bcd"))
>>> getsize(["a", tuple("bcd"), Foo(), {"foo": "bar", "baz": "bar"}])
>>> getsize({"foo": "bar", "baz": "bar"})
>>> getsize({})
>>> getsize({"foo":"bar"})
>>> getsize("foo")
>>> class Bar():
...     def baz():
...         pass
>>> getsize(Bar())
>>> getsize(Bar().__dict__)
>>> sys.getsizeof(Bar())
>>> getsize(Bar.__dict__)
>>> sys.getsizeof(Bar.__dict__)

This implementation breaks down on class definitions and function definitions because we don't go after all of their attributes, but since they should only exist once in memory for the process, their size really doesn't matter too much.

Answer #2

What does the yield keyword do in Python?

Answer Outline/Summary

  • A function with yield, when called, returns a Generator.
  • Generators are iterators because they implement the iterator protocol, so you can iterate over them.
  • A generator can also be sent information, making it conceptually a coroutine.
  • In Python 3, you can delegate from one generator to another in both directions with yield from.
  • (Appendix critiques a couple of answers, including the top one, and discusses the use of return in a generator.)


yield is only legal inside of a function definition, and the inclusion of yield in a function definition makes it return a generator.

The idea for generators comes from other languages (see footnote 1) with varying implementations. In Python's generators, the execution of the code is frozen at the point of the yield. When the generator is called (methods are discussed below) execution resumes and then freezes at the next yield.

yield provides an easy way of implementing the iterator protocol, defined by the following two methods: __iter__ and next (Python 2) or __next__ (Python 3). Both of those methods make an object an iterator that you could type-check with the Iterator Abstract Base Class from the collections module.

>>> def func():
...     yield "I am"
...     yield "a generator!"
>>> type(func)                 # A function with yield is still a function
<type "function">
>>> gen = func()
>>> type(gen)                  # but it returns a generator
<type "generator">
>>> hasattr(gen, "__iter__")   # that"s an iterable
>>> hasattr(gen, "next")       # and with .next (.__next__ in Python 3)
True                           # implements the iterator protocol.

The generator type is a sub-type of iterator:

>>> import collections, types
>>> issubclass(types.GeneratorType, collections.Iterator)
True

And if necessary, we can type-check like this:

>>> isinstance(gen, types.GeneratorType)
True
>>> isinstance(gen, collections.Iterator)
True

A feature of an Iterator is that once exhausted, you can't reuse or reset it:

>>> list(gen)
["I am", "a generator!"]
>>> list(gen)
[]

You"ll have to make another if you want to use its functionality again (see footnote 2):

>>> list(func())
["I am", "a generator!"]

One can yield data programmatically, for example:

def func(an_iterable):
    for item in an_iterable:
        yield item

The above simple generator is also equivalent to the below - as of Python 3.3 (and not available in Python 2), you can use yield from:

def func(an_iterable):
    yield from an_iterable

However, yield from also allows for delegation to subgenerators, which will be explained in the following section on cooperative delegation with sub-coroutines.


yield forms an expression that allows data to be sent into the generator (see footnote 3)

Here is an example, take note of the received variable, which will point to the data that is sent to the generator:

def bank_account(deposited, interest_rate):
    while True:
        calculated_interest = interest_rate * deposited 
        received = yield calculated_interest
        if received:
            deposited += received

>>> my_account = bank_account(1000, .05)

First, we must queue up the generator with the builtin function, next. It will call the appropriate next or __next__ method, depending on the version of Python you are using:

>>> first_year_interest = next(my_account)
>>> first_year_interest
50.0

And now we can send data into the generator. (Sending None is the same as calling next.) :

>>> next_year_interest = my_account.send(first_year_interest + 1000)
>>> next_year_interest
102.5

Cooperative Delegation to Sub-Coroutine with yield from

Now, recall that yield from is available in Python 3. This allows us to delegate coroutines to a subcoroutine:

def money_manager(expected_rate):
    # must receive deposited value from .send():
    under_management = yield                   # yield None to start.
    while True:
        try:
            additional_investment = yield expected_rate * under_management
            if additional_investment:
                under_management += additional_investment
        except GeneratorExit:
            """TODO: write function to send unclaimed funds to state"""
            """TODO: write function to mail tax info to client"""
            return

def investment_account(deposited, manager):
    """very simple model of an investment account that delegates to a manager"""
    # must queue up manager:
    next(manager)      # <- same as manager.send(None)
    # This is where we send the initial deposit to the manager:
    manager.send(deposited)
    try:
        yield from manager
    except GeneratorExit:
        return manager.close()  # delegate?

And now we can delegate functionality to a sub-generator and it can be used by a generator just as above:

my_manager = money_manager(.06)
my_account = investment_account(1000, my_manager)
first_year_return = next(my_account) # -> 60.0

Now simulate adding another 1,000 to the account plus the return on the account (60.0):

next_year_return = my_account.send(first_year_return + 1000)
next_year_return # 123.6

You can read more about the precise semantics of yield from in PEP 380.

Other Methods: close and throw

The close method raises GeneratorExit at the point the function execution was frozen. This will also be called by __del__ so you can put any cleanup code where you handle the GeneratorExit:

my_account.close()

You can also throw an exception which can be handled in the generator or propagated back to the user:

import sys
try:
    raise ValueError
except:
    my_manager.throw(*sys.exc_info())


Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
  File "<stdin>", line 6, in money_manager
  File "<stdin>", line 2, in <module>
ValueError


I believe I have covered all aspects of the following question:

What does the yield keyword do in Python?

It turns out that yield does a lot. I'm sure I could add even more thorough examples to this. If you want more or have some constructive criticism, let me know by commenting below.


Critique of the Top/Accepted Answer

  • It is confused on what makes an iterable, just using a list as an example. See my references above, but in summary: an iterable has an __iter__ method returning an iterator. An iterator additionally provides a .next (Python 2) or .__next__ (Python 3) method, which is implicitly called by for loops until it raises StopIteration, and once it does, it will continue to do so.
  • It then uses a generator expression to describe what a generator is. Since a generator is simply a convenient way to create an iterator, it only confuses the matter, and we still have not yet gotten to the yield part.
  • In Controlling a generator exhaustion he calls the .next method, when instead he should use the builtin function, next. It would be an appropriate layer of indirection, because his code does not work in Python 3.
  • Itertools? This was not relevant to what yield does at all.
  • No discussion of the methods that yield provides along with the new functionality yield from in Python 3. The top/accepted answer is a very incomplete answer.

Critique of answer suggesting yield in a generator expression or comprehension.

The grammar currently allows any expression in a list comprehension.

expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
                     ("=" (yield_expr|testlist_star_expr))*)
yield_expr: "yield" [yield_arg]
yield_arg: "from" test | testlist

Since yield is an expression, some have touted it as interesting to use in comprehensions or generator expressions - despite citing no particularly good use-case.

The CPython core developers are discussing deprecating its allowance. Here's a relevant post from the mailing list:

On 30 January 2017 at 19:05, Brett Cannon wrote:

On Sun, 29 Jan 2017 at 16:39 Craig Rodrigues wrote:

I"m OK with either approach. Leaving things the way they are in Python 3 is no good, IMHO.

My vote is it be a SyntaxError since you're not getting what you expect from the syntax.

I"d agree that"s a sensible place for us to end up, as any code relying on the current behaviour is really too clever to be maintainable.

In terms of getting there, we'll likely want:

  • SyntaxWarning or DeprecationWarning in 3.7
  • Py3k warning in 2.7.x
  • SyntaxError in 3.8

Cheers, Nick.

-- Nick Coghlan | ncoghlan at | Brisbane, Australia

Further, there is an outstanding issue (10544) which seems to be pointing in the direction of this never being a good idea (PyPy, a Python implementation written in Python, is already raising syntax warnings.)

Bottom line, until the developers of CPython tell us otherwise: Don't put yield in a generator expression or comprehension.

The return statement in a generator

In Python 2:

In a generator function, the return statement is not allowed to include an expression_list. In that context, a bare return indicates that the generator is done and will cause StopIteration to be raised.

An expression_list is basically any number of expressions separated by commas - essentially, in Python 2, you can stop the generator with return, but you can't return a value.

In Python 3:

In a generator function, the return statement indicates that the generator is done and will cause StopIteration to be raised. The returned value (if any) is used as an argument to construct StopIteration and becomes the StopIteration.value attribute.
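A small sketch of that Python 3 behaviour:

def finishing_generator():
    yield 1
    return "all done"          # becomes StopIteration.value

g = finishing_generator()
print(next(g))                 # 1
try:
    next(g)
except StopIteration as e:
    print(e.value)             # all done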


  1. The languages CLU, Sather, and Icon were referenced in the proposal to introduce the concept of generators to Python. The general idea is that a function can maintain internal state and yield intermediate data points on demand by the user. This promised to be superior in performance to other approaches, including Python threading, which isn't even available on some systems.

  2. This means, for example, that range objects aren't Iterators, even though they are iterable, because they can be reused. Like lists, their __iter__ methods return iterator objects.

  3. yield was originally introduced as a statement, meaning that it could only appear at the beginning of a line in a code block. Now yield creates a yield expression. This change was proposed to allow a user to send data into the generator just as one might receive it. To send data, one must be able to assign it to something, and for that, a statement just won't work.

Answer #3

Explain __all__ in Python?

I keep seeing the variable __all__ set in different __init__.py files.

What does this do?

What does __all__ do?

It declares the semantically "public" names from a module. If there is a name in __all__, users are expected to use it, and they can have the expectation that it will not change.

It also will have programmatic effects:

import *

__all__ in a module, e.g.

__all__ = ["foo", "Bar"]

means that when you import * from the module, only those names in the __all__ are imported:

from module import *               # imports foo and Bar

Documentation tools

Documentation and code autocompletion tools may (in fact, should) also inspect the __all__ to determine what names to show as available from a module.

__init__.py makes a directory a Python package

From the docs:

The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path.

In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable.

So the __init__.py can declare the __all__ for a package.

Managing an API:

A package is typically made up of modules that may import one another, but that are necessarily tied together with an __init__.py file. That file is what makes the directory an actual Python package. For example, say you have the following files in a package:

package
├── __init__.py
├── module_1.py
└── module_2.py


Let"s create these files with Python so you can follow along - you could paste the following into a Python 3 shell:

from pathlib import Path

package = Path("package")
package.mkdir()

(package / "__init__.py").write_text("""
from .module_1 import *
from .module_2 import *
""")

package_module_1 = package / "module_1.py"
package_module_1.write_text("""
__all__ = ["foo"]
imp_detail1 = imp_detail2 = imp_detail3 = None
def foo(): pass
""")

package_module_2 = package / "module_2.py"
package_module_2.write_text("""
__all__ = ["Bar"]
imp_detail1 = imp_detail2 = imp_detail3 = None
class Bar: pass
""")

And now you have presented a complete api that someone else can use when they import your package, like so:

import package
package.foo()
package.Bar()

And the package won"t have all the other implementation details you used when creating your modules cluttering up the package namespace.

__all__ in __init__.py

After more work, maybe you've decided that the modules are too big (like many thousands of lines?) and need to be split up. So you do the following:

package
├── __init__.py
├── module_1
│   ├── __init__.py
│   └── foo_implementation.py
└── module_2
    ├── __init__.py
    └── Bar_implementation.py

First make the subpackage directories with the same names as the modules:

subpackage_1 = package / "module_1"
subpackage_1.mkdir()
subpackage_2 = package / "module_2"
subpackage_2.mkdir()

Move the implementations:

package_module_1.rename(subpackage_1 / "foo_implementation.py")
package_module_2.rename(subpackage_2 / "Bar_implementation.py")

create __init__.pys for the subpackages that declare the __all__ for each:

(subpackage_1 / "__init__.py").write_text("""
from .foo_implementation import *
__all__ = ["foo"]
""")

(subpackage_2 / "__init__.py").write_text("""
from .Bar_implementation import *
__all__ = ["Bar"]
""")

And now you still have the api provisioned at the package level:

>>> import package
>>> package.Bar()
<package.module_2.Bar_implementation.Bar object at 0x7f0c2349d210>

And you can easily add things to your API that you can manage at the subpackage level instead of the subpackage's module level. If you want to add a new name to the API, you simply update the __init__.py, e.g. in module_2:

from .Bar_implementation import *
from .Baz_implementation import *
__all__ = ["Bar", "Baz"]

And if you"re not ready to publish Baz in the top level API, in your top level you could have:

from .module_1 import *       # also constrained by __all__"s
from .module_2 import *       # in the"s
__all__ = ["foo", "Bar"]     # further constraining the names advertised

and if your users are aware of the availability of Baz, they can use it:

import package
package.module_2.Baz()

but if they don"t know about it, other tools (like pydoc) won"t inform them.

You can later change that when Baz is ready for prime time:

from .module_1 import *
from .module_2 import *
__all__ = ["foo", "Bar", "Baz"]

Prefixing _ versus __all__:

By default, Python will export all names that do not start with an _. You certainly could rely on this mechanism. Some packages in the Python standard library, in fact, do rely on this, but to do so, they alias their imports, for example, in ctypes/__init__.py:

import os as _os, sys as _sys

Using the _ convention can be more elegant because it removes the redundancy of naming the names again. But it adds the redundancy for imports (if you have a lot of them) and it is easy to forget to do this consistently - and the last thing you want is to have to indefinitely support something you intended to only be an implementation detail, just because you forgot to prefix an _ when naming a function.

I personally write an __all__ early in my development lifecycle for modules so that others who might use my code know what they should use and not use.

Most packages in the standard library also use __all__.

When avoiding __all__ makes sense

It makes sense to stick to the _ prefix convention in lieu of __all__ when:

  • You"re still in early development mode and have no users, and are constantly tweaking your API.
  • Maybe you do have users, but you have unittests that cover the API, and you"re still actively adding to the API and tweaking in development.

An export decorator

The downside of using __all__ is that you have to write the names of functions and classes being exported twice - and the information is kept separate from the definitions. We could use a decorator to solve this problem.

I got the idea for such an export decorator from David Beazley's talk on packaging. This implementation seems to work well in CPython's traditional importer. If you have a special import hook or system, I do not guarantee it, but if you adopt it, it is fairly trivial to back out - you'll just need to manually add the names back into the __all__.

So in a utility library, for example lib.py, you would define the decorator:

import sys

def export(fn):
    mod = sys.modules[fn.__module__]
    if hasattr(mod, "__all__"):
        mod.__all__.append(fn.__name__)
    else:
        mod.__all__ = [fn.__name__]
    return fn

and then, where you would define an __all__, you do this:

$ cat > main.py
from lib import export
__all__ = [] # optional - we create a list if __all__ is not there.

@export
def foo(): pass

@export
def bar():
    "bar"

def main():
    print("main")

if __name__ == "__main__":
    main()

And this works fine whether run as main or imported by another module.

$ cat > run.py
import main
main.main()

$ python run.py
main

And API provisioning with import * will work too:

$ cat > build.py
from main import *
main() # expected to error here, not exported

$ python build.py
Traceback (most recent call last):
  File "build.py", line 4, in <module>
    main() # expected to error here, not exported
NameError: name 'main' is not defined

Answer #4


Call the is_path_exists_or_creatable() function defined below.

Strictly Python 3. That's just how we roll.

A Tale of Two Questions

The question of "How do I test pathname validity and, for valid pathnames, the existence or writability of those paths?" is clearly two separate questions. Both are interesting, and neither have received a genuinely satisfactory answer here... or, well, anywhere that I could grep.

vikki"s answer probably hews the closest, but has the remarkable disadvantages of:

  • Needlessly opening (...and then failing to reliably close) file handles.
  • Needlessly writing (...and then failing to reliably close or delete) 0-byte files.
  • Ignoring OS-specific errors differentiating between non-ignorable invalid pathnames and ignorable filesystem issues. Unsurprisingly, this is critical under Windows. (See below.)
  • Ignoring race conditions resulting from external processes concurrently (re)moving parent directories of the pathname to be tested. (See below.)
  • Ignoring connection timeouts resulting from this pathname residing on stale, slow, or otherwise temporarily inaccessible filesystems. This could expose public-facing services to potential DoS-driven attacks. (See below.)

We"re gonna fix all that.

Question #0: What"s Pathname Validity Again?

Before hurling our fragile meat suits into the python-riddled moshpits of pain, we should probably define what we mean by "pathname validity." What defines validity, exactly?

By "pathname validity," we mean the syntactic correctness of a pathname with respect to the root filesystem of the current system – regardless of whether that path or parent directories thereof physically exist. A pathname is syntactically correct under this definition if it complies with all syntactic requirements of the root filesystem.

By "root filesystem," we mean:

  • On POSIX-compatible systems, the filesystem mounted to the root directory (/).
  • On Windows, the filesystem mounted to %HOMEDRIVE%, the colon-suffixed drive letter containing the current Windows installation (typically but not necessarily C:).

The meaning of "syntactic correctness," in turn, depends on the type of root filesystem. For ext4 (and most but not all POSIX-compatible) filesystems, a pathname is syntactically correct if and only if that pathname:

  • Contains no null bytes (i.e., \x00 in Python). This is a hard requirement for all POSIX-compatible filesystems.
  • Contains no path components longer than 255 bytes (e.g., "a"*256 in Python). A path component is a longest substring of a pathname containing no / character (e.g., bergtatt, ind, i, and fjeldkamrene in the pathname /bergtatt/ind/i/fjeldkamrene).

Syntactic correctness. Root filesystem. That's it.

Question #1: How Now Shall We Do Pathname Validity?

Validating pathnames in Python is surprisingly non-intuitive. I'm in firm agreement with Fake Name here: the official os.path package should provide an out-of-the-box solution for this. For unknown (and probably uncompelling) reasons, it doesn't. Fortunately, unrolling your own ad-hoc solution isn't that gut-wrenching...

O.K., it actually is. It's hairy; it's nasty; it probably chortles as it burbles and giggles as it glows. But what you gonna do? Nuthin'.

We"ll soon descend into the radioactive abyss of low-level code. But first, let"s talk high-level shop. The standard os.stat() and os.lstat() functions raise the following exceptions when passed invalid pathnames:

  • For pathnames residing in non-existing directories, instances of FileNotFoundError.
  • For pathnames residing in existing directories:
    • Under Windows, instances of WindowsError whose winerror attribute is 123 (i.e., ERROR_INVALID_NAME).
    • Under all other OSes:
    • For pathnames containing null bytes (i.e., "\x00"), instances of TypeError.
    • For pathnames containing path components longer than 255 bytes, instances of OSError whose errno attribute is:
      • Under SunOS and the *BSD family of OSes, errno.ERANGE. (This appears to be an OS-level bug, otherwise referred to as "selective interpretation" of the POSIX standard.)
      • Under all other OSes, errno.ENAMETOOLONG.

Crucially, this implies that only pathnames residing in existing directories are validatable. The os.stat() and os.lstat() functions raise generic FileNotFoundError exceptions when passed pathnames residing in non-existing directories, regardless of whether those pathnames are invalid or not. Directory existence takes precedence over pathname invalidity.
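A hypothetical demonstration of that precedence on a Linux/ext4 system (the directory name is invented; behaviour is OS-specific):

import os

too_long = "a" * 256   # one path component over the 255-byte limit

try:
    os.lstat("/definitely_missing_dir/" + too_long)
except OSError as exc:
    print(type(exc).__name__, exc.errno)   # FileNotFoundError: the missing directory wins

try:
    os.lstat("/" + too_long)
except OSError as exc:
    print(type(exc).__name__, exc.errno)   # OSError with errno.ENAMETOOLONG: the component is checked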

Does this mean that pathnames residing in non-existing directories are not validatable? Yes – unless we modify those pathnames to reside in existing directories. Is that even safely feasible, however? Shouldn't modifying a pathname prevent us from validating the original pathname?

To answer this question, recall from above that syntactically correct pathnames on the ext4 filesystem contain no path components (A) containing null bytes or (B) over 255 bytes in length. Hence, an ext4 pathname is valid if and only if all path components in that pathname are valid. This is true of most real-world filesystems of interest.

Does that pedantic insight actually help us? Yes. It reduces the larger problem of validating the full pathname in one fell swoop to the smaller problem of only validating all path components in that pathname. Any arbitrary pathname is validatable (regardless of whether that pathname resides in an existing directory or not) in a cross-platform manner by following the following algorithm:

  1. Split that pathname into path components (e.g., the pathname /troldskog/faren/vild into the list ["", "troldskog", "faren", "vild"]).
  2. For each such component:
    1. Join the pathname of a directory guaranteed to exist with that component into a new temporary pathname (e.g., /troldskog) .
    2. Pass that pathname to os.stat() or os.lstat(). If that pathname and hence that component is invalid, this call is guaranteed to raise an exception exposing the type of invalidity rather than a generic FileNotFoundError exception. Why? Because that pathname resides in an existing directory. (Circular logic is circular.)

Is there a directory guaranteed to exist? Yes, but typically only one: the topmost directory of the root filesystem (as defined above).

Passing pathnames residing in any other directory (and hence not guaranteed to exist) to os.stat() or os.lstat() invites race conditions, even if that directory was previously tested to exist. Why? Because external processes cannot be prevented from concurrently removing that directory after that test has been performed but before that pathname is passed to os.stat() or os.lstat(). Unleash the dogs of mind-fellating insanity!

There exists a substantial side benefit to the above approach as well: security. (Isn't that nice?) Specifically:

Front-facing applications validating arbitrary pathnames from untrusted sources by simply passing such pathnames to os.stat() or os.lstat() are susceptible to Denial of Service (DoS) attacks and other black-hat shenanigans. Malicious users may attempt to repeatedly validate pathnames residing on filesystems known to be stale or otherwise slow (e.g., NFS or Samba shares); in that case, blindly statting incoming pathnames is liable to either eventually fail with connection timeouts or consume more time and resources than your feeble capacity to withstand unemployment.

The above approach obviates this by only validating the path components of a pathname against the root directory of the root filesystem. (If even that's stale, slow, or inaccessible, you've got larger problems than pathname validation.)

Lost? Great. Let's begin. (Python 3 assumed. See "What Is Fragile Hope for 300, leycec?")

import errno, os, sys

# Sadly, Python fails to provide the following magic number for us.
ERROR_INVALID_NAME = 123
"""
Windows-specific error code indicating an invalid pathname.

See Also
    Official listing of all such codes.
"""

def is_pathname_valid(pathname: str) -> bool:
    """
    `True` if the passed pathname is a valid pathname for the current OS;
    `False` otherwise.
    """
    # If this pathname is either not a string or is but is empty, this pathname
    # is invalid.
    try:
        if not isinstance(pathname, str) or not pathname:
            return False

        # Strip this pathname's Windows-specific drive specifier (e.g., `C:`)
        # if any. Since Windows prohibits path components from containing `:`
        # characters, failing to strip this `:`-suffixed prefix would
        # erroneously invalidate all valid absolute Windows pathnames.
        _, pathname = os.path.splitdrive(pathname)

        # Directory guaranteed to exist. If the current OS is Windows, this is
        # the drive to which Windows was installed (e.g., the "%HOMEDRIVE%"
        # environment variable); else, the typical root directory.
        root_dirname = os.environ.get("HOMEDRIVE", "C:") \
            if sys.platform == "win32" else os.path.sep
        assert os.path.isdir(root_dirname)   # ...Murphy and her ironclad Law

        # Append a path separator to this directory if needed.
        root_dirname = root_dirname.rstrip(os.path.sep) + os.path.sep

        # Test whether each path component split from this pathname is valid or
        # not, ignoring non-existent and non-readable path components.
        for pathname_part in pathname.split(os.path.sep):
            try:
                os.lstat(root_dirname + pathname_part)
            # If an OS-specific exception is raised, its error code
            # indicates whether this pathname is valid or not. Unless this
            # is the case, this exception implies an ignorable kernel or
            # filesystem complaint (e.g., path not found or inaccessible).
            #
            # Only the following exceptions indicate invalid pathnames:
            #
            # * Instances of the Windows-specific "WindowsError" class
            #   defining the "winerror" attribute whose value is
            #   "ERROR_INVALID_NAME". Under Windows, "winerror" is more
            #   fine-grained and hence useful than the generic "errno"
            #   attribute. When a too-long pathname is passed, for example,
            #   "errno" is "ENOENT" (i.e., no such file or directory) rather
            #   than "ENAMETOOLONG" (i.e., file name too long).
            #
            # * Instances of the cross-platform "OSError" class defining the
            #   generic "errno" attribute whose value is either:
            #   * Under most POSIX-compatible OSes, "ENAMETOOLONG".
            #   * Under some edge-case OSes (e.g., SunOS, *BSD), "ERANGE".
            except OSError as exc:
                if hasattr(exc, "winerror"):
                    if exc.winerror == ERROR_INVALID_NAME:
                        return False
                elif exc.errno in {errno.ENAMETOOLONG, errno.ERANGE}:
                    return False
    # If a "TypeError" exception was raised, it almost certainly has the
    # error message "embedded NUL character" indicating an invalid pathname.
    except TypeError as exc:
        return False
    # If no exception was raised, all path components and hence this
    # pathname itself are valid. (Praise be to the curmudgeonly python.)
    else:
        return True
    # If any other exception was raised, this is an unrelated fatal issue
    # (e.g., a bug). Permit this exception to unwind the call stack.
    #
    # Did we mention this should be shipped with Python already?

Done. Don"t squint at that code. (It bites.)

Question #2: Possibly Invalid Pathname Existence or Creatability, Eh?

Testing the existence or creatability of possibly invalid pathnames is, given the above solution, mostly trivial. The little key here is to call the previously defined function before testing the passed path:

def is_path_creatable(pathname: str) -> bool:
    """
    `True` if the current user has sufficient permissions to create the passed
    pathname; `False` otherwise.
    """
    # Parent directory of the passed path. If empty, we substitute the current
    # working directory (CWD) instead.
    dirname = os.path.dirname(pathname) or os.getcwd()
    return os.access(dirname, os.W_OK)

def is_path_exists_or_creatable(pathname: str) -> bool:
    """
    `True` if the passed pathname is a valid pathname for the current OS _and_
    either currently exists or is hypothetically creatable; `False` otherwise.

    This function is guaranteed to _never_ raise exceptions.
    """
    try:
        # To prevent "os" module calls from raising undesirable exceptions on
        # invalid pathnames, is_pathname_valid() is explicitly called first.
        return is_pathname_valid(pathname) and (
            os.path.exists(pathname) or is_path_creatable(pathname))
    # Report failure on non-fatal filesystem complaints (e.g., connection
    # timeouts, permissions issues) implying this path to be inaccessible. All
    # other exceptions are unrelated fatal issues and should not be caught here.
    except OSError:
        return False

Done and done. Except not quite.

Question #3: Possibly Invalid Pathname Existence or Writability on Windows

There exists a caveat. Of course there does.

As the official os.access() documentation admits:

Note: I/O operations may fail even when os.access() indicates that they would succeed, particularly for operations on network filesystems which may have permissions semantics beyond the usual POSIX permission-bit model.

To no one"s surprise, Windows is the usual suspect here. Thanks to extensive use of Access Control Lists (ACL) on NTFS filesystems, the simplistic POSIX permission-bit model maps poorly to the underlying Windows reality. While this (arguably) isn"t Python"s fault, it might nonetheless be of concern for Windows-compatible applications.

If this is you, a more robust alternative is wanted. If the passed path does not exist, we instead attempt to create a temporary file guaranteed to be immediately deleted in the parent directory of that path – a more portable (if expensive) test of creatability:

import os, tempfile

def is_path_sibling_creatable(pathname: str) -> bool:
    """
    `True` if the current user has sufficient permissions to create **siblings**
    (i.e., arbitrary files in the parent directory) of the passed pathname;
    `False` otherwise.
    """
    # Parent directory of the passed path. If empty, we substitute the current
    # working directory (CWD) instead.
    dirname = os.path.dirname(pathname) or os.getcwd()

    try:
        # For safety, explicitly close and hence delete this temporary file
        # immediately after creating it in the passed path's parent directory.
        with tempfile.TemporaryFile(dir=dirname): pass
        return True
    # While the exact type of exception raised by the above function depends on
    # the current version of the Python interpreter, all such types subclass the
    # following exception superclass.
    except EnvironmentError:
        return False

def is_path_exists_or_creatable_portable(pathname: str) -> bool:
    """
    `True` if the passed pathname is a valid pathname on the current OS _and_
    either currently exists or is hypothetically creatable in a cross-platform
    manner optimized for POSIX-unfriendly filesystems; `False` otherwise.

    This function is guaranteed to _never_ raise exceptions.
    """
    try:
        # To prevent "os" module calls from raising undesirable exceptions on
        # invalid pathnames, is_pathname_valid() is explicitly called first.
        return is_pathname_valid(pathname) and (
            os.path.exists(pathname) or is_path_sibling_creatable(pathname))
    # Report failure on non-fatal filesystem complaints (e.g., connection
    # timeouts, permissions issues) implying this path to be inaccessible. All
    # other exceptions are unrelated fatal issues and should not be caught here.
    except OSError:
        return False

Note, however, that even this may not be enough.

Thanks to User Access Control (UAC), the ever-inimicable Windows Vista and all subsequent iterations thereof blatantly lie about permissions pertaining to system directories. When non-Administrator users attempt to create files in either the canonical C:\Windows or C:\Windows\system32 directories, UAC superficially permits the user to do so while actually isolating all created files into a "Virtual Store" in that user's profile. (Who could have possibly imagined that deceiving users would have harmful long-term consequences?)

This is crazy. This is Windows.

Prove It

Dare we? It"s time to test-drive the above tests.

Since NULL is the only character prohibited in pathnames on UNIX-oriented filesystems, let's leverage that to demonstrate the cold, hard truth – ignoring non-ignorable Windows shenanigans, which frankly bore and anger me in equal measure:

>>> print(""" valid? " + str(is_pathname_valid("")))
"" valid? True
>>> print("Null byte valid? " + str(is_pathname_valid("x00")))
Null byte valid? False
>>> print("Long path valid? " + str(is_pathname_valid("a" * 256)))
Long path valid? False
>>> print(""/dev" exists or creatable? " + str(is_path_exists_or_creatable("/dev")))
"/dev" exists or creatable? True
>>> print(""/dev/" exists or creatable? " + str(is_path_exists_or_creatable("/dev/")))
"/dev/" exists or creatable? False
>>> print("Null byte exists or creatable? " + str(is_path_exists_or_creatable("x00")))
Null byte exists or creatable? False

Beyond sanity. Beyond pain. You will find Python portability concerns.

Answer #5

I"d like to shed a little bit more light on the interplay of iter, __iter__ and __getitem__ and what happens behind the curtains. Armed with that knowledge, you will be able to understand why the best you can do is

    print("iteration will probably work")
except TypeError:
    print("not iterable")

I will list the facts first and then follow up with a quick reminder of what happens when you employ a for loop in python, followed by a discussion to illustrate the facts.


  1. You can get an iterator from any object o by calling iter(o) if at least one of the following conditions holds true:

    a) o has an __iter__ method which returns an iterator object. An iterator is any object with an __iter__ and a __next__ (Python 2: next) method.

    b) o has a __getitem__ method.

  2. Checking for an instance of Iterable or Sequence, or checking for the attribute __iter__ is not enough.

  3. If an object o implements only __getitem__, but not __iter__, iter(o) will construct an iterator that tries to fetch items from o by integer index, starting at index 0. The iterator will catch any IndexError (but no other errors) that is raised and then raises StopIteration itself.

  4. In the most general sense, there's no way to check whether the iterator returned by iter is sane other than to try it out.

  5. If an object o implements __iter__, the iter function will make sure that the object returned by __iter__ is an iterator. There is no sanity check if an object only implements __getitem__.

  6. __iter__ wins. If an object o implements both __iter__ and __getitem__, iter(o) will call __iter__.

  7. If you want to make your own objects iterable, always implement the __iter__ method.

for loops

In order to follow along, you need an understanding of what happens when you employ a for loop in Python. Feel free to skip right to the next section if you already know.

When you use for item in o for some iterable object o, Python calls iter(o) and expects an iterator object as the return value. An iterator is any object which implements a __next__ (or next in Python 2) method and an __iter__ method.

By convention, the __iter__ method of an iterator should return the object itself (i.e. return self). Python then calls next on the iterator until StopIteration is raised. All of this happens implicitly, but the following demonstration makes it visible:

import random

class DemoIterable(object):
    def __iter__(self):
        print("__iter__ called")
        return DemoIterator()

class DemoIterator(object):
    def __iter__(self):
        return self

    def __next__(self):
        print("__next__ called")
        r = random.randint(1, 10)
        if r == 5:
            print("raising StopIteration")
            raise StopIteration
        return r

Iteration over a DemoIterable:

>>> di = DemoIterable()
>>> for x in di:
...     print(x)
__iter__ called
__next__ called
__next__ called
__next__ called
__next__ called
__next__ called
__next__ called
raising StopIteration

Discussion and illustrations

On point 1 and 2: getting an iterator and unreliable checks

Consider the following class:

class BasicIterable(object):
    def __getitem__(self, item):
        if item == 3:
            raise IndexError
        return item

Calling iter with an instance of BasicIterable will return an iterator without any problems because BasicIterable implements __getitem__.

>>> b = BasicIterable()
>>> iter(b)
<iterator object at 0x7f1ab216e320>

However, it is important to note that b does not have the __iter__ attribute and is not considered an instance of Iterable or Sequence:

>>> from collections import Iterable, Sequence
>>> hasattr(b, "__iter__")
>>> isinstance(b, Iterable)
>>> isinstance(b, Sequence)

This is why Fluent Python by Luciano Ramalho recommends calling iter and handling the potential TypeError as the most accurate way to check whether an object is iterable. Quoting directly from the book:

As of Python 3.4, the most accurate way to check whether an object x is iterable is to call iter(x) and handle a TypeError exception if it isn’t. This is more accurate than using isinstance(x, abc.Iterable) , because iter(x) also considers the legacy __getitem__ method, while the Iterable ABC does not.
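If you want that advice as a reusable helper, a minimal sketch (the function name is my own) could look like this:

def is_iterable(obj):
    """Return True if iter(obj) succeeds, per the Fluent Python recommendation."""
    try:
        iter(obj)
        return True
    except TypeError:
        return False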

On point 3: Iterating over objects which only provide __getitem__, but not __iter__

Iterating over an instance of BasicIterable works as expected: Python constructs an iterator that tries to fetch items by index, starting at zero, until an IndexError is raised. The demo object's __getitem__ method simply returns the item which was supplied as the argument to __getitem__(self, item) by the iterator returned by iter.

>>> b = BasicIterable()
>>> it = iter(b)
>>> next(it)
0
>>> next(it)
1
>>> next(it)
2
>>> next(it)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

Note that the iterator raises StopIteration when it cannot return the next item and that the IndexError which is raised for item == 3 is handled internally. This is why looping over a BasicIterable with a for loop works as expected:

>>> for x in b:
...     print(x)
0
1
2

Here"s another example in order to drive home the concept of how the iterator returned by iter tries to access items by index. WrappedDict does not inherit from dict, which means instances won"t have an __iter__ method.

class WrappedDict(object): # note: no inheritance from dict!
    def __init__(self, dic):
        self._dict = dic

    def __getitem__(self, item):
        try:
            return self._dict[item] # delegate to dict.__getitem__
        except KeyError:
            raise IndexError

Note that calls to __getitem__ are delegated to dict.__getitem__ for which the square bracket notation is simply a shorthand.

>>> w = WrappedDict({-1: "not printed",
...                   0: "hi", 1: "StackOverflow", 2: "!",
...                   4: "not printed", 
...                   "x": "not printed"})
>>> for x in w:
...     print(x)
hi
StackOverflow
!

On point 4 and 5: iter checks for an iterator when it calls __iter__:

When iter(o) is called for an object o, iter will make sure that the return value of __iter__, if the method is present, is an iterator. This means that the returned object must implement __next__ (or next in Python 2) and __iter__. iter cannot perform any sanity checks for objects which only provide __getitem__, because it has no way to check whether the items of the object are accessible by integer index.

class FailIterIterable(object):
    def __iter__(self):
        return object() # not an iterator

class FailGetitemIterable(object):
    def __getitem__(self, item):
        raise Exception

Note that constructing an iterator from FailIterIterable instances fails immediately, while constructing an iterator from FailGetitemIterable succeeds, but will throw an Exception on the first call to __next__.

>>> fii = FailIterIterable()
>>> iter(fii)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: iter() returned non-iterator of type "object"
>>> fgi = FailGetitemIterable()
>>> it = iter(fgi)
>>> next(it)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/path/", line 42, in __getitem__
    raise Exception
Exception

On point 6: __iter__ wins

This one is straightforward. If an object implements __iter__ and __getitem__, iter will call __iter__. Consider the following class

class IterWinsDemo(object):
    def __iter__(self):
        return iter(["__iter__", "wins"])

    def __getitem__(self, item):
        return ["__getitem__", "wins"][item]

and the output when looping over an instance:

>>> iwd = IterWinsDemo()
>>> for x in iwd:
...     print(x)
__iter__
wins

On point 7: your iterable classes should implement __iter__

You might ask yourself why most builtin sequences like list implement an __iter__ method when __getitem__ would be sufficient.

class WrappedList(object): # note: no inheritance from list!
    def __init__(self, lst):
        self._list = lst

    def __getitem__(self, item):
        return self._list[item]

After all, iteration over instances of the class above, which delegates calls to __getitem__ to list.__getitem__ (using the square bracket notation), will work fine:

>>> wl = WrappedList(["A", "B", "C"])
>>> for x in wl:
...     print(x)
A
B
C

The reasons your custom iterables should implement __iter__ are as follows:

  1. If you implement __iter__, instances will be considered iterables, and isinstance(o, collections.abc.Iterable) will return True.
  2. If the object returned by __iter__ is not an iterator, iter will fail immediately and raise a TypeError.
  3. The special handling of __getitem__ exists for backwards compatibility reasons. Quoting again from Fluent Python:

That is why any Python sequence is iterable: they all implement __getitem__ . In fact, the standard sequences also implement __iter__, and yours should too, because the special handling of __getitem__ exists for backward compatibility reasons and may be gone in the future (although it is not deprecated as I write this).

Answer #6

If you look at the documentation for the built-in errors, you'll see that most Exception classes assign their first argument as a message attribute. Not all of them do though.

Notably, EnvironmentError (with subclasses IOError and OSError) has a first argument of errno, second of strerror. There is no message... strerror is roughly analogous to what would normally be a message.

More generally, subclasses of Exception can do whatever they want. They may or may not have a message attribute. Future built-in Exceptions may not have a message attribute. Any Exception subclass imported from third-party libraries or user code may not have a message attribute.

I think the proper way of handling this is to identify the specific Exception subclasses you want to catch, and then catch only those instead of everything with an except Exception, then utilize whatever attributes that specific subclass defines however you want.
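For instance, a minimal sketch of catching one specific subclass and using the attributes it actually documents, rather than a hypothetical message (the path here is made up):

try:
    open("/definitely/missing/file")
except OSError as e:
    # OSError documents errno, strerror and filename, so use those directly.
    print(e.errno, e.strerror, e.filename)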

If you must print something, I think that printing the caught Exception itself is most likely to do what you want, whether it has a message attribute or not.

You could also check for the message attribute if you wanted, like this, but I wouldn't really suggest it as it just seems messy:

except Exception as e:
    # Just print(e) is cleaner and more likely what you want,
    # but if you insist on printing message specifically whenever possible...
    if hasattr(e, "message"):
        print(e.message)
    else:
        print(e)

Answer #7

How can I make as "perfect" a subclass of dict as possible?

The end goal is to have a simple dict in which the keys are lowercase.

  • If I override __getitem__/__setitem__, then get/set don't work. How do I make them work? Surely I don't need to implement them individually?

  • Am I preventing pickling from working, and do I need to implement __setstate__ etc?

  • Do I need repr, update and __init__?

  • Should I just use mutablemapping (it seems one shouldn't use UserDict or DictMixin)? If so, how? The docs aren't exactly enlightening.

The accepted answer would be my first approach, but since it has some issues, and since no one has addressed the alternative, actually subclassing a dict, I'm going to do that here.

What"s wrong with the accepted answer?

This seems like a rather simple request to me:

How can I make as "perfect" a subclass of dict as possible? The end goal is to have a simple dict in which the keys are lowercase.

The accepted answer doesn't actually subclass dict, and a test for this fails:

>>> isinstance(MyTransformedDict([("Test", "test")]), dict)
False

Ideally, any type-checking code would be testing for the interface we expect, or an abstract base class, but if our data objects are being passed into functions that test for dict, and we can't "fix" those functions, this code will fail.
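
A quick sketch of what that interface-based check looks like (assuming MyTransformedDict is the MutableMapping-based class from the accepted answer):

from collections.abc import MutableMapping

# the ABC test passes even though the dict test above fails
print(isinstance(MyTransformedDict([("Test", "test")]), MutableMapping))  # True
print(isinstance(MyTransformedDict([("Test", "test")]), dict))            # False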

Other quibbles one might make:

  • The accepted answer is also missing the classmethod: fromkeys.
  • The accepted answer also has a redundant __dict__ - therefore taking up more space in memory:

    >>> = "bar"
    >>> s.__dict__
    {"foo": "bar", "store": {"test": "test"}}

Actually subclassing dict

We can reuse the dict methods through inheritance. All we need to do is create an interface layer that ensures keys are passed into the dict in lowercase form if they are strings.

If I override __getitem__/__setitem__, then get/set don't work. How do I make them work? Surely I don't need to implement them individually?

Well, implementing them each individually is the downside to this approach and the upside to using MutableMapping (see the accepted answer), but it's really not that much more work.

First, let"s factor out the difference between Python 2 and 3, create a singleton (_RaiseKeyError) to make sure we know if we actually get an argument to dict.pop, and create a function to ensure our string keys are lowercase:

from itertools import chain
try:              # Python 2
    str_base = basestring
    items = "iteritems"
except NameError: # Python 3
    str_base = str, bytes, bytearray
    items = "items"

_RaiseKeyError = object() # singleton for no-default behavior

def ensure_lower(maybe_str):
    """dict keys can be any hashable object - only call lower if str"""
    return maybe_str.lower() if isinstance(maybe_str, str_base) else maybe_str

Now we implement - I'm using super with the full arguments so that this code works for Python 2 and 3:

class LowerDict(dict):  # dicts take a mapping or iterable as their optional first argument
    __slots__ = () # no __dict__ - that would be redundant
    @staticmethod # because this doesn't make sense as a global function.
    def _process_args(mapping=(), **kwargs):
        if hasattr(mapping, items):
            mapping = getattr(mapping, items)()
        return ((ensure_lower(k), v) for k, v in chain(mapping, getattr(kwargs, items)()))
    def __init__(self, mapping=(), **kwargs):
        super(LowerDict, self).__init__(self._process_args(mapping, **kwargs))
    def __getitem__(self, k):
        return super(LowerDict, self).__getitem__(ensure_lower(k))
    def __setitem__(self, k, v):
        return super(LowerDict, self).__setitem__(ensure_lower(k), v)
    def __delitem__(self, k):
        return super(LowerDict, self).__delitem__(ensure_lower(k))
    def get(self, k, default=None):
        return super(LowerDict, self).get(ensure_lower(k), default)
    def setdefault(self, k, default=None):
        return super(LowerDict, self).setdefault(ensure_lower(k), default)
    def pop(self, k, v=_RaiseKeyError):
        if v is _RaiseKeyError:
            return super(LowerDict, self).pop(ensure_lower(k))
        return super(LowerDict, self).pop(ensure_lower(k), v)
    def update(self, mapping=(), **kwargs):
        super(LowerDict, self).update(self._process_args(mapping, **kwargs))
    def __contains__(self, k):
        return super(LowerDict, self).__contains__(ensure_lower(k))
    def copy(self): # don't delegate w/ super - dict.copy() -> dict :(
        return type(self)(self)
    @classmethod
    def fromkeys(cls, keys, v=None):
        return super(LowerDict, cls).fromkeys((ensure_lower(k) for k in keys), v)
    def __repr__(self):
        return "{0}({1})".format(type(self).__name__, super(LowerDict, self).__repr__())

We use an almost boiler-plate approach for any method or special method that references a key, but otherwise, by inheritance, we get methods: len, clear, items, keys, popitem, and values for free. While this required some careful thought to get right, it is trivial to see that this works.

(Note that has_key was deprecated in Python 2, removed in Python 3.)

Here"s some usage:

>>> ld = LowerDict(dict(foo="bar"))
>>> ld["FOO"]
>>> ld["foo"]
>>> ld.pop("FoO")
>>> ld.setdefault("Foo")
>>> ld
{"foo": None}
>>> ld.get("Bar")
>>> ld.setdefault("Bar")
>>> ld
{"bar": None, "foo": None}
>>> ld.popitem()
("bar", None)

Am I preventing pickling from working, and do I need to implement __setstate__ etc?


And the dict subclass pickles just fine:

>>> import pickle
>>> pickle.dumps(ld)
>>> pickle.loads(pickle.dumps(ld))
{"foo": None}
>>> type(pickle.loads(pickle.dumps(ld)))
<class "__main__.LowerDict">


Do I need repr, update and __init__?

We defined update and __init__, but you have a beautiful __repr__ by default:

>>> ld # without __repr__ defined for the class, we get this
{"foo": None}

However, it"s good to write a __repr__ to improve the debugability of your code. The ideal test is eval(repr(obj)) == obj. If it"s easy to do for your code, I strongly recommend it:

>>> ld = LowerDict({})
>>> eval(repr(ld)) == ld
True
>>> ld = LowerDict(dict(a=1, b=2, c=3))
>>> eval(repr(ld)) == ld
True

You see, it"s exactly what we need to recreate an equivalent object - this is something that might show up in our logs or in backtraces:

>>> ld
LowerDict({"a": 1, "c": 3, "b": 2})


Should I just use MutableMapping (it seems one shouldn't use UserDict or DictMixin)? If so, how? The docs aren't exactly enlightening.

Yeah, these are a few more lines of code, but they're intended to be comprehensive. My first inclination would be to use the accepted answer, and if there were issues with it, I'd then look at my answer - as it's a little more complicated, and there's no ABC to help me get my interface right.

Premature optimization is going for greater complexity in search of performance. MutableMapping is simpler - so it gets an immediate edge, all else being equal. Nevertheless, to lay out all the differences, let's compare and contrast.

I should add that there was a push to put a similar dictionary into the collections module, but it was rejected. You should probably just lowercase the key at the point of use instead (for example, regular_dict[key.lower()]); that is far more easily debugged.

Compare and contrast

There are 6 interface functions implemented with the MutableMapping (which is missing fromkeys) and 11 with the dict subclass. I don't need to implement __iter__ or __len__, but instead I have to implement get, setdefault, pop, update, copy, __contains__, and fromkeys - but these are fairly trivial, since I can use inheritance for most of those implementations.

The MutableMapping implements some things in Python that dict implements in C - so I would expect a dict subclass to be more performant in some cases.

We get a free __eq__ in both approaches - both of which assume equality only if another dict is all lowercase - but again, I think the dict subclass will compare more quickly.
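
If you want to check the performance claim yourself, a rough timeit sketch along these lines would do (it assumes both LowerDict from above and the accepted answer's MyTransformedDict are defined in the same module; exact numbers will vary by machine and Python version):

import timeit

setup = "from __main__ import {0}; d = {0}(foo=1)"

for cls in ("LowerDict", "MyTransformedDict"):
    # time a simple key lookup through each implementation
    t = timeit.timeit("d['FOO']", setup=setup.format(cls), number=100000)
    print(cls, t)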


  • subclassing MutableMapping is simpler with fewer opportunities for bugs, but slower, takes more memory (see redundant dict), and fails isinstance(x, dict)
  • subclassing dict is faster, uses less memory, and passes isinstance(x, dict), but it has greater complexity to implement.

Which is more perfect? That depends on your definition of perfect.

Answer #8

I was asked the same question recently, and came up with several answers. I hope it's OK to revive this thread, as I wanted to elaborate on a few of the use cases mentioned, and add a few new ones.

Most metaclasses I"ve seen do one of two things:

  1. Registration (adding a class to a data structure):

    models = {}
    class ModelMetaclass(type):
        def __new__(meta, name, bases, attrs):
            models[name] = cls = type.__new__(meta, name, bases, attrs)
            return cls
    class Model(object):
        __metaclass__ = ModelMetaclass

    Whenever you subclass Model, your class is registered in the models dictionary:

    >>> class A(Model):
    ...     pass
    >>> class B(A):
    ...     pass
    >>> models
    {"A": <__main__.A class at 0x...>,
     "B": <__main__.B class at 0x...>}

    This can also be done with class decorators:

    models = {}
    def model(cls):
        models[cls.__name__] = cls
        return cls
    @model
    class A(object):
        pass

    Or with an explicit registration function:

    models = {}
    def register_model(cls):
        models[cls.__name__] = cls
    class A(object):
        pass
    register_model(A)

    Actually, this is pretty much the same: you mention class decorators unfavorably, but it's really nothing more than syntactic sugar for a function invocation on a class, so there's no magic about it.

    Anyway, the advantage of metaclasses in this case is inheritance, as they work for any subclasses, whereas the other solutions only work for subclasses explicitly decorated or registered.

    >>> class B(A):
    ...     pass
    >>> models
    {"A": <__main__.A class at 0x...> # No B :(
  2. Refactoring (modifying class attributes or adding new ones):

    class ModelMetaclass(type):
        def __new__(meta, name, bases, attrs):
            fields = {}
            for key, value in attrs.items():
                if isinstance(value, Field):
           = "%s.%s" % (name, key)
                    fields[key] = value
            for base in bases:
                if hasattr(base, "_fields"):
            attrs["_fields"] = fields
            return type.__new__(meta, name, bases, attrs)
    class Model(object):
        __metaclass__ = ModelMetaclass

    Whenever you subclass Model and define some Field attributes, they are injected with their names (for more informative error messages, for example), and grouped into a _fields dictionary (for easy iteration, without having to look through all the class attributes and all its base classes" attributes every time):

    >>> class A(Model):
    ...     foo = Integer()
    >>> class B(A):
    ...     bar = String()
    >>> B._fields
    {"foo": Integer(""), "bar": String("")}

    Again, this can be done (without inheritance) with a class decorator:

    def model(cls):
        fields = {}
        for key, value in vars(cls).items():
            if isinstance(value, Field):
       = "%s.%s" % (cls.__name__, key)
                fields[key] = value
        for base in cls.__bases__:
            if hasattr(base, "_fields"):
        cls._fields = fields
        return cls
    @model
    class A(object):
        foo = Integer()
    class B(A):
        bar = String()
    # B.bar has no name :(
    # B._fields is {"foo": Integer("A.foo")} :(

    Or explicitly:

    class A(object):
        foo = Integer("")
        _fields = {"foo": foo} # Don"t forget all the base classes" fields, too!

    Although, contrary to your advocacy of readable and maintainable non-meta programming, this is much more cumbersome, redundant and error-prone:

    class B(A):
        bar = String()
    # vs.
    class B(A):
        bar = String("bar")
        _fields = {"": bar, "":}

Having considered the most common and concrete use cases, the only cases where you absolutely HAVE to use metaclasses are when you want to modify the class name or list of base classes, because once defined, these parameters are baked into the class, and no decorator or function can unbake them.

class Metaclass(type):
    def __new__(meta, name, bases, attrs):
        return type.__new__(meta, "foo", (int,), attrs)

class Baseclass(object):
    __metaclass__ = Metaclass

class A(Baseclass):
    pass

class B(A):
    pass

print A.__name__ # foo
print B.__name__ # foo
print issubclass(B, A)   # False
print issubclass(B, int) # True

This may be useful in frameworks for issuing warnings whenever classes with similar names or incomplete inheritance trees are defined, but I can't think of a reason besides trolling to actually change these values. Maybe David Beazley can.

Anyway, in Python 3, metaclasses also have the __prepare__ method, which lets you evaluate the class body into a mapping other than a dict, thus supporting ordered attributes, overloaded attributes, and other wicked cool stuff:

import collections

class Metaclass(type):

    @classmethod
    def __prepare__(meta, name, bases, **kwds):
        return collections.OrderedDict()

    def __new__(meta, name, bases, attrs, **kwds):
        print([k for k in attrs if not k.startswith("__")])
        # Do more stuff...
        return type.__new__(meta, name, bases, dict(attrs))

class A(metaclass=Metaclass):
    x = 1
    y = 2

# prints ["x", "y"] rather than ["y", "x"]


class ListDict(dict):
    def __setitem__(self, key, value):
        self.setdefault(key, []).append(value)

class Metaclass(type):

    @classmethod
    def __prepare__(meta, name, bases, **kwds):
        return ListDict()

    def __new__(meta, name, bases, attrs, **kwds):
        print(attrs["foo"])
        # Do more stuff (unwrap single-item lists before actually creating the class)...
        return type.__new__(meta, name, bases,
                            {k: v[0] if len(v) == 1 else v for k, v in attrs.items()})

class A(metaclass=Metaclass):

    def foo(self):
        pass

    def foo(self, x):
        pass

# prints [<function foo at 0x...>, <function foo at 0x...>] rather than <function foo at 0x...>

You might argue ordered attributes can be achieved with creation counters, and overloading can be simulated with default arguments:

import itertools

class Attribute(object):
    _counter = itertools.count()
    def __init__(self):
        self._count = next(Attribute._counter)

class A(object):
    x = Attribute()
    y = Attribute()

A._order = sorted([(k, v) for k, v in vars(A).items() if isinstance(v, Attribute)],
                  key=lambda item: item[1]._count)


class A(object):

    def _foo0(self):
        pass

    def _foo1(self, x):
        pass

    def foo(self, x=None):
        if x is None:
            return self._foo0()
        else:
            return self._foo1(x)

Besides being much uglier, it's also less flexible: what if you want ordered literal attributes, like integers and strings? What if None is a valid value for x?

Here"s a creative way to solve the first problem:

import sys

class Builder(object):
    def __call__(self, cls):
        cls._order = self.frame.f_code.co_names
        return cls

def ordered():
    builder = Builder()
    def trace(frame, event, arg):
        # capture the frame of the class body as soon as it starts executing,
        # then stop tracing
        builder.frame = frame
        sys.settrace(None)
    sys.settrace(trace)
    return builder

@ordered()
class A(object):
    x = 1
    y = "foo"

print A._order # ["x", "y"]

And here"s a creative way to solve the second one:

_undefined = object()

class A(object):

    def _foo0(self):
        pass

    def _foo1(self, x):
        pass

    def foo(self, x=_undefined):
        if x is _undefined:
            return self._foo0()
        else:
            return self._foo1(x)

But this is much, MUCH voodoo-er than a simple metaclass (especially the first one, which really melts your brain). My point is, you look at metaclasses as unfamiliar and counter-intuitive, but you can also look at them as the next step of evolution in programming languages: you just have to adjust your mindset. After all, you could probably do everything in C, including defining a struct with function pointers and passing it as the first argument to its functions. A person seeing C++ for the first time might say, "what is this magic? Why is the compiler implicitly passing this to methods, but not to regular and static functions? It's better to be explicit and verbose about your arguments". But then, object-oriented programming is much more powerful once you get it; and so is this, uh... quasi-aspect-oriented programming, I guess. And once you understand metaclasses, they're actually very simple, so why not use them when convenient?

And finally, metaclasses are rad, and programming should be fun. Using standard programming constructs and design patterns all the time is boring and uninspiring, and hinders your imagination. Live a little! Here's a metametaclass, just for you.

class MetaMetaclass(type):
    def __new__(meta, name, bases, attrs):
        def __new__(meta, name, bases, attrs):
            cls = type.__new__(meta, name, bases, attrs)
            cls._label = "Made in %s" % meta.__name__
            return cls 
        attrs["__new__"] = __new__
        return type.__new__(meta, name, bases, attrs)

class China(type):
    __metaclass__ = MetaMetaclass

class Taiwan(type):
    __metaclass__ = MetaMetaclass

class A(object):
    __metaclass__ = China

class B(object):
    __metaclass__ = Taiwan

print A._label # Made in China
print B._label # Made in Taiwan


This is a pretty old question, but it's still getting upvotes, so I thought I'd add a link to a more comprehensive answer. If you'd like to read more about metaclasses and their uses, I've just published an article about it here.

Answer #9

There are a few approaches you could use here.

Duck typing

Since Python is duck typed, you could simply do as follows (which seems to be the way usually suggested):

try:
    data = data.decode()
except (UnicodeDecodeError, AttributeError):
    pass

You could use hasattr as you describe, however, and it'd probably be fine. This is, of course, assuming the .decode() method for the given object returns a string, and has no nasty side effects.

I personally recommend either the exception or hasattr method, but whatever you use is up to you.
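
For completeness, a minimal sketch of the hasattr variant (the function name to_text is just illustrative):

def to_text(data):
    # if the object has a decode method, treat it as bytes-like
    if hasattr(data, "decode"):
        return data.decode("utf-8")
    return data

print(to_text(b"hello"))  # hello
print(to_text("hello"))   # hello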

Use str()

This approach is uncommon, but is possible:

data = str(data, "utf-8")

Other encodings are permissible, just like with the buffer protocol's .decode(). You can also pass a third parameter to specify error handling.
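
For instance (a small sketch; the byte string and the error handler chosen here are arbitrary):

raw = b"caf\xe9"                     # Latin-1 bytes, not valid UTF-8
text = str(raw, "utf-8", "replace")  # third argument picks the error handler
print(text)                          # "caf" followed by the replacement character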

Single-dispatch generic functions (Python 3.4+)

Python 3.4 and above include a nifty feature called single-dispatch generic functions, via functools.singledispatch. This is a bit more verbose, but it's also more explicit:

from functools import singledispatch

@singledispatch
def func(data):
    # This is the generic implementation: assume bytes-like data and decode it
    data = data.decode()
    return data

@func.register(str)
def _(data):
    # data will already be a string
    return data

You could also make special handlers for bytearray and bytes objects if you so chose.
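
For example, a quick sketch of one such extra handler, continuing the func defined above:

@func.register(bytearray)
def _(data):
    # dispatch bytearray to its own handler instead of the generic one
    return data.decode()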

Beware: single-dispatch functions only work on the first argument! This is an intentional feature; see PEP 443.

Answer #10

This is an improvement of the accepted answer by Carl Meyer. It works with virtualenv for Python 3 and 2 and also for the venv module in Python 3:

import sys

def is_venv():
    return (hasattr(sys, "real_prefix") or
            (hasattr(sys, "base_prefix") and sys.base_prefix != sys.prefix))

The check for sys.real_prefix covers virtualenv; the check that sys.base_prefix exists and differs from sys.prefix covers the stdlib venv module.

Consider a script that uses the function like this:

if is_venv():
    print("inside virtualenv or venv")
else:
    print("outside virtualenv or venv")

And the following invocation:

$ python2 
outside virtualenv or venv

$ python3 
outside virtualenv or venv

$ python2 -m virtualenv virtualenv2
$ . virtualenv2/bin/activate
(virtualenv2) $ python 
inside virtualenv or venv
(virtualenv2) $ deactivate

$ python3 -m virtualenv virtualenv3
$ . virtualenv3/bin/activate
(virtualenv3) $ python 
inside virtualenv or venv
(virtualenv3) $ deactivate 

$ python3 -m venv venv3
$ . venv3/bin/activate
(venv3) $ python 
inside virtualenv or venv
(venv3) $ deactivate