Python | os.supports_follow_symlinks object


Several methods in the os module accept a follow_symlinks parameter. Because different platforms provide different functionality, the follow_symlinks option may be supported on one platform but not on another. os.supports_follow_symlinks in Python is a set object that lists the methods in the os module that allow the use of their follow_symlinks parameter on the current platform.

Whether a particular method allows the use of its follow_symlinks parameter can be checked with the in operator on os.supports_follow_symlinks.
For example:
The expression below checks whether the os.stat() method supports its follow_symlinks parameter when called on the local platform.

 os.stat in os.supports_follow_symlinks 

Syntax: os.supports_follow_symlinks

Parameter: This is a non-callable set object, so no parameters are required.

Return Type: This is a set object containing the methods in the os module that permit the use of their follow_symlinks parameter.

Code #1: Using the os.supports_follow_symlinks object

# Python program to explain the os.supports_follow_symlinks object

# import the os module
import os


# Get the set of all methods that allow
# the use of their follow_symlinks parameter
method_set = os.supports_follow_symlinks

# Print the set
print(method_set)

Output:

 {<built-in function stat>, <built-in function chown>, <built-in function link>, <built-in function access>, <built-in function utime>}

Code #2: Using the os.supports_follow_symlinks object to check whether a particular method allows its follow_symlinks parameter.

# Python program to explain the os.supports_follow_symlinks object

# import the os module
import os


# Check whether the os.stat() method
# allows the use of its follow_symlinks
# parameter on the local platform
support = os.stat in os.supports_follow_symlinks

# Print the result
print(support)


# Check whether the os.fwalk() method
# allows the use of its follow_symlinks
# parameter on the local platform
support = os.fwalk in os.supports_follow_symlinks

# Print the result
print(support)

Output:

True
False
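
As a practical follow-up, the membership test can drive a runtime decision about whether it is safe to pass follow_symlinks at all. The helper below is a minimal, hypothetical sketch; it assumes nothing beyond the standard os module, and the function name is made up for illustration.

import os

def stat_no_follow(path):
    # Query the symlink itself where the platform supports it;
    # otherwise fall back to the default (link-following) behavior.
    if os.stat in os.supports_follow_symlinks:
        return os.stat(path, follow_symlinks=False)
    return os.stat(path)

print(stat_no_follow("."))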




Python | os.supports_follow_symlinks object: StackOverflow Questions

Check if file is symlink in python

In Python, is there a function to check if a given file/directory is a symlink? For example, for the files below, my wrapper function should return True.

# ls -l
total 0
lrwxrwxrwx 1 root root 8 2012-06-16 18:58 dir -> ../temp/
lrwxrwxrwx 1 root root 6 2012-06-16 18:55 link -> ../log
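
For reference, the standard library answers this directly: os.path.islink() and pathlib.Path.is_symlink() return True only when the path itself is a symbolic link (broken links included) and False otherwise. A minimal sketch using the names from the listing above:

import os
from pathlib import Path

# Both calls report on the link itself, not on its target
print(os.path.islink("dir"))          # True
print(Path("link").is_symlink())      # True
print(os.path.islink("no_such"))      # False for paths that do not exist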

Answer #1

It helps to install a Python package foo on your machine (this can also be in a virtualenv) so that you can import the package foo from other projects and also from [I]Python prompts.

It does a similar job to pip, easy_install, etc.


Using setup.py

Let's start with some definitions:

Package - A folder/directory that contains an __init__.py file.
Module - A valid python file with .py extension.
Distribution - How one package relates to other packages and modules.

Let's say you want to install a package named foo. Then you do,

$ git clone https://github.com/user/foo  
$ cd foo
$ python setup.py install

Instead, if you don't want to actually install it but would still like to use it, then do,

$ python setup.py develop  

This command will create symlinks to the source directory within site-packages instead of copying things. Because of this, it is quite fast (particularly for large packages).


Creating setup.py

If you have your package tree like,

foo
├── foo
│   ├── data_struct.py
│   ├── __init__.py
│   └── internals.py
├── README
├── requirements.txt
└── setup.py

Then, you do the following in your setup.py script so that it can be installed on some machine:

from setuptools import setup

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   author="Man Foo",
   author_email="[email protected]",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
)

Instead, if your package tree is more complex like the one below:

foo
├── foo
│   ├── data_struct.py
│   ├── __init__.py
│   └── internals.py
├── README
├── requirements.txt
├── scripts
│   ├── cool
│   └── skype
└── setup.py

Then, your setup.py in this case would be like:

from setuptools import setup

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   author="Man Foo",
   author_email="[email protected]",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
   scripts=[
            "scripts/cool",
            "scripts/skype",
           ]
)

Add more stuff to setup.py to make it more complete:

from setuptools import setup

with open("README", "r") as f:
    long_description = f.read()

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   license="MIT",
   long_description=long_description,
   author="Man Foo",
   author_email="[email protected]",
   url="http://www.foopackage.com/",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
   scripts=[
            "scripts/cool",
            "scripts/skype",
           ]
)

The long_description is used in pypi.org as the README description of your package.


And finally, you're now ready to upload your package to PyPi.org so that others can install your package using pip install yourpackage.

At this point there are two options.

  • publish on the temporary test.pypi.org server to familiarize yourself with the procedure, and then publish it on the permanent pypi.org server for the public to use your package.
  • publish straight away on the permanent pypi.org server, if you are already familiar with the procedure and have your user credentials (e.g., username, password, package name)

Once your package name is registered on pypi.org, nobody else can claim or use it. The Python packaging guide suggests the twine package for uploading your package to PyPI. Thus,

(1) the first step is to locally build the distributions using:

# prereq: wheel (pip install wheel)  
$ python setup.py sdist bdist_wheel   

(2) then using twine for uploading either to test.pypi.org or pypi.org:

$ twine upload --repository testpypi dist/*  
username: ***  
password: ***  

It will take a few minutes for the package to appear on test.pypi.org. Once you're satisfied with it, you can then upload your package to the real & permanent index of pypi.org simply with:

$ twine upload dist/*  

Optionally, you can also sign the files in your package with GPG by:

$ twine upload dist/* --sign 


Answer #2

This depends on how you installed TensorFlow. I am going to use the same headings used by TensorFlow's installation instructions to structure this answer.


Pip installation

Run:

python -c "import tensorflow as tf; print(tf.__version__)"  # for Python 2
python3 -c "import tensorflow as tf; print(tf.__version__)"  # for Python 3

Note that python is symlinked to /usr/bin/python3 in some Linux distributions, so use python instead of python3 in these cases.

pip list | grep tensorflow for Python 2 or pip3 list | grep tensorflow for Python 3 will also show the version of Tensorflow installed.


Virtualenv installation

Run:

python -c "import tensorflow as tf; print(tf.__version__)"  # for both Python 2 and Python 3

pip list | grep tensorflow will also show the version of Tensorflow installed.

For example, I have installed TensorFlow 0.9.0 in a virtualenv for Python 3. So, I get:

$ python -c "import tensorflow as tf; print(tf.__version__)"
0.9.0

$ pip list | grep tensorflow
tensorflow (0.9.0)

Answer #3

Although almost every possible way has been listed in (at least one of) the existing answers (e.g. Python 3.4 specific stuff was added), I'll try to group everything together.

Note: every piece of Python standard library code that I'm going to post belongs to version 3.5.3.

Problem statement:

  1. Check file (arguably also folder ("special" file)?) existence
  2. Don't use try / except / else / finally blocks

Possible solutions:

  1. [Python 3]: os.path.exists(path) (also check other function family members like os.path.isfile, os.path.isdir, os.path.lexists for slightly different behaviors)

    os.path.exists(path)
    

    Return True if path refers to an existing path or an open file descriptor. Returns False for broken symbolic links. On some platforms, this function may return False if permission is not granted to execute os.stat() on the requested file, even if the path physically exists.

    All good, but if following the import tree:

    • os.path - posixpath.py (ntpath.py)

      • genericpath.py, line ~#20+

        def exists(path):
            """Test whether a path exists.  Returns False for broken symbolic links"""
            try:
                st = os.stat(path)
            except os.error:
                return False
            return True
        

    it's just a try / except block around [Python 3]: os.stat(path, *, dir_fd=None, follow_symlinks=True). So, your code is try / except free, but lower in the framestack there's (at least) one such block. This also applies to other funcs (including os.path.isfile).

    1.1. [Python 3]: Path.is_file()

    • It's a fancier (and more Pythonic) way of handling paths, but
    • Under the hood, it does exactly the same thing (pathlib.py, line ~#1330):

      def is_file(self):
          """
          Whether this path is a regular file (also True for symlinks pointing
          to regular files).
          """
          try:
              return S_ISREG(self.stat().st_mode)
          except OSError as e:
              if e.errno not in (ENOENT, ENOTDIR):
                  raise
              # Path doesn't exist or is a broken symlink
              # (see https://bitbucket.org/pitrou/pathlib/issue/12/)
              return False
      
  2. [Python 3]: With Statement Context Managers. Either:

    • Create one:

      class Swallow:  # Dummy example
          swallowed_exceptions = (FileNotFoundError,)
      
          def __enter__(self):
              print("Entering...")
      
          def __exit__(self, exc_type, exc_value, exc_traceback):
              print("Exiting:", exc_type, exc_value, exc_traceback)
              return exc_type in Swallow.swallowed_exceptions  # only swallow FileNotFoundError (not e.g. TypeError - if the user passes a wrong argument like None or float or ...)
      
      • And its usage - I'll replicate the os.path.isfile behavior (note that this is just for demonstration purposes, do not attempt to write such code for production):

        import os
        import stat
        
        
        def isfile_seaman(path):  # Dummy func
            result = False
            with Swallow():
                result = stat.S_ISREG(os.stat(path).st_mode)
            return result
        
    • Use [Python 3]: contextlib.suppress(*exceptions) - which was specifically designed for selectively suppressing exceptions


    But, they seem to be wrappers over try / except / else / finally blocks, as [Python 3]: The with statement states:

    This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.

  3. Filesystem traversal functions (and search the results for matching item(s))


    Since these iterate over folders, (in most of the cases) they are inefficient for our problem (there are exceptions, like non wildcarded globbing - as @ShadowRanger pointed out), so I'm not going to insist on them. Not to mention that in some cases, filename processing might be required.

  4. [Python 3]: os.access(path, mode, *, dir_fd=None, effective_ids=False, follow_symlinks=True) whose behavior is close to os.path.exists (actually it's wider, mainly because of the 2nd argument)

    • user permissions might restrict the file "visibility" as the doc states:

      ...test if the invoking user has the specified access to path. mode should be F_OK to test the existence of path...

    os.access("/tmp", os.F_OK)

    Since I also work in C, I use this method as well because under the hood, it calls native APIs (again, via "${PYTHON_SRC_DIR}/Modules/posixmodule.c"), but it also opens a gate for possible user errors, and it's not as Pythonic as other variants. So, as @AaronHall rightly pointed out, don't use it unless you know what you're doing:

    Note: calling native APIs is also possible via [Python 3]: ctypes - A foreign function library for Python, but in most cases it's more complicated.

    (Win specific): Since vcruntime* (msvcr*) .dll exports a [MS.Docs]: _access, _waccess function family as well, here's an example:

    Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import os, ctypes
    >>> ctypes.CDLL("msvcrt")._waccess(u"C:\Windows\System32\cmd.exe", os.F_OK)
    0
    >>> ctypes.CDLL("msvcrt")._waccess(u"C:\Windows\System32\cmd.exe.notexist", os.F_OK)
    -1
    

    Notes:

    • Although it's not a good practice, I'm using os.F_OK in the call, but that's just for clarity (its value is 0)
    • I'm using _waccess so that the same code works on Python 3 and Python 2 (in spite of Unicode related differences between them)
    • Although this targets a very specific area, it was not mentioned in any of the previous answers


    The Lnx (Ubtu (16 x64)) counterpart as well:

    Python 3.5.2 (default, Nov 17 2016, 17:05:23)
    [GCC 5.4.0 20160609] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import os, ctypes
    >>> ctypes.CDLL("/lib/x86_64-linux-gnu/libc.so.6").access(b"/tmp", os.F_OK)
    0
    >>> ctypes.CDLL("/lib/x86_64-linux-gnu/libc.so.6").access(b"/tmp.notexist", os.F_OK)
    -1
    

    Notes:

    • Instead of hardcoding libc's path ("/lib/x86_64-linux-gnu/libc.so.6"), which may (and most likely will) vary across systems, None (or the empty string) can be passed to the CDLL constructor (ctypes.CDLL(None).access(b"/tmp", os.F_OK)). According to [man7]: DLOPEN(3):

      If filename is NULL, then the returned handle is for the main program. When given to dlsym(), this handle causes a search for a symbol in the main program, followed by all shared objects loaded at program startup, and then all shared objects loaded by dlopen() with the flag RTLD_GLOBAL.

      • Main (current) program (python) is linked against libc, so its symbols (including access) will be loaded
      • This has to be handled with care, since functions like main, Py_Main and (all the) others are available; calling them could have disastrous effects (on the current program)
      • This doesn't apply to Win, though (but that's not such a big deal, since msvcrt.dll is located in "%SystemRoot%\System32" which is in %PATH% by default). I wanted to take things further and replicate this behavior on Win (and submit a patch), but as it turns out, the [MS.Docs]: GetProcAddress function only "sees" exported symbols, so unless someone declares the functions in the main executable as __declspec(dllexport) (why on Earth would the regular person do that?), the main program is loadable but pretty much unusable
  5. Install some third-party module with filesystem capabilities

    Most likely, will rely on one of the ways above (maybe with slight customizations).
    One example would be (again, Win specific) [GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions, which is a Python wrapper over WINAPIs.

    But, since this is more like a workaround, I'm stopping here.

  6. Another (lame) workaround (gainarie) is (as I like to call it,) the sysadmin approach: use Python as a wrapper to execute shell commands

    • Win:

      (py35x64_test) e:\Work\Dev\StackOverflow\q000082831>"e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" -c "import os; print(os.system('dir /b C:\Windows\System32\cmd.exe > nul 2>&1'))"
      0

      (py35x64_test) e:\Work\Dev\StackOverflow\q000082831>"e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" -c "import os; print(os.system('dir /b C:\Windows\System32\cmd.exe.notexist > nul 2>&1'))"
      1
      
    • Nix (Lnx (Ubtu)):

      $ python3 -c "import os; print(os.system('ls /tmp > /dev/null 2>&1'))"
      0
      $ python3 -c "import os; print(os.system('ls /tmp.notexist > /dev/null 2>&1'))"
      512
      

Bottom line:

  • Do use try / except / else / finally blocks, because they can prevent you from running into a series of nasty problems. A counter-example that I can think of is performance: such blocks are costly, so try not to place them in code that is supposed to run hundreds of thousands of times per second (but since, in most cases, it involves disk access, that won't be the case).

Final note(s):

  • I will try to keep this answer up to date; any suggestions are welcome, and I will incorporate anything useful that comes up into the answer

Answer #4

I had used Homebrew to install Python 2.7 on OS X 10.10 and the new install was missing the symlinks. I ran

brew link --overwrite python

as mentioned in How to symlink python in Homebrew? and it solved the problem.

Answer #5

In Python, you can copy files using the following modules:


import os
import shutil
import subprocess

1) Copying files using the shutil module

shutil.copyfile signature

shutil.copyfile(src_file, dest_file, *, follow_symlinks=True)

# example    
shutil.copyfile("source.txt", "destination.txt")

shutil.copy signature

shutil.copy(src_file, dest_file, *, follow_symlinks=True)

# example
shutil.copy("source.txt", "destination.txt")

shutil.copy2 signature

shutil.copy2(src_file, dest_file, *, follow_symlinks=True)

# example
shutil.copy2("source.txt", "destination.txt")  

shutil.copyfileobj signature

shutil.copyfileobj(src_file_object, dest_file_object[, length])

# example
file_src = "source.txt"  
f_src = open(file_src, "rb")

file_dest = "destination.txt"  
f_dest = open(file_dest, "wb")

shutil.copyfileobj(f_src, f_dest)  
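
Related to the topic of this page: shutil.copyfile, shutil.copy and shutil.copy2 all accept a follow_symlinks keyword. The sketch below is illustrative only (the file names are made up and a POSIX system is assumed); with follow_symlinks=False and a symlink as the source, the link itself is copied instead of the file it points to.

import os
import shutil

# create a file and a symlink pointing at it
with open("source.txt", "w") as f:
    f.write("hello\n")
os.symlink("source.txt", "link_to_source.txt")

# copy the link itself rather than the data it points to
shutil.copy("link_to_source.txt", "new_link.txt", follow_symlinks=False)

print(os.path.islink("new_link.txt"))  # True: new_link.txt is another symlink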

2) Copying files using the os module

os.popen signature

os.popen(cmd[, mode[, bufsize]])

# example
# In Unix/Linux
os.popen("cp source.txt destination.txt") 

# In Windows
os.popen("copy source.txt destination.txt")

os.system signature

os.system(command)


# In Linux/Unix
os.system("cp source.txt destination.txt")  

# In Windows
os.system("copy source.txt destination.txt")

3) Copying files using the subprocess module

subprocess.call signature

subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False)

# example (WARNING: setting `shell=True` might be a security-risk)
# In Linux/Unix
status = subprocess.call("cp source.txt destination.txt", shell=True) 

# In Windows
status = subprocess.call("copy source.txt destination.txt", shell=True)

subprocess.check_output signature

subprocess.check_output(args, *, stdin=None, stderr=None, shell=False, universal_newlines=False)

# example (WARNING: setting `shell=True` might be a security-risk)
# In Linux/Unix
status = subprocess.check_output("cp source.txt destination.txt", shell=True)

# In Windows
status = subprocess.check_output("copy source.txt destination.txt", shell=True)

Answer #6

For your stated scenario, there is no reason to combine realpath and abspath, since os.path.realpath actually calls os.path.abspath before returning a result (I checked Python 2.5 to Python 3.6).

  • os.path.abspath returns the absolute path, but does NOT resolve symlinks in its argument.
  • os.path.realpath will first resolve any symbolic links in the path, and then return the absolute path.

However, if you expect your path to contain a ~, neither abspath nor realpath will resolve ~ to the user's home directory, and the resulting path will be invalid. You will need to use os.path.expanduser to resolve this to the user's directory.

For the sake of a thorough explanation, here are some results which I've verified in Windows and Linux, in Python 3.4 and Python 2.6. The current directory (./) is my home directory, which looks like this:

myhome
|- data (symlink to /mnt/data)
|- subdir (extra directory, for verbose explanation)
# os.path.abspath returns the absolute path, but does NOT resolve symlinks in its argument
os.path.abspath("./")
"/home/myhome"
os.path.abspath("./subdir/../data")
"/home/myhome/data"


# os.path.realpath will resolve symlinks AND return an absolute path from a relative path
os.path.realpath("./")
"/home/myhome"
os.path.realpath("./subdir/../")
"/home/myhome"
os.path.realpath("./subdir/../data")
"/mnt/data"

# NEITHER abspath nor realpath will resolve or remove ~.
os.path.abspath("~/data")
"/home/myhome/~/data"

os.path.realpath("~/data")
"/home/myhome/~/data"

# And the returned path will be invalid
os.path.exists(os.path.abspath("~/data"))
False
os.path.exists(os.path.realpath("~/data"))
False

# Use realpath + expanduser to resolve ~
os.path.realpath(os.path.expanduser("~/subdir/../data"))
"/mnt/data"

Answer #7

How do I check whether a file exists, using Python, without using a try statement?

Now available since Python 3.4, import and instantiate a Path object with the file name, and check the is_file method (note that this returns True for symlinks pointing to regular files as well):

>>> from pathlib import Path
>>> Path("/").is_file()
False
>>> Path("/initrd.img").is_file()
True
>>> Path("/doesnotexist").is_file()
False

If you're on Python 2, you can backport the pathlib module from pypi, pathlib2, or otherwise check isfile from the os.path module:

>>> import os
>>> os.path.isfile("/")
False
>>> os.path.isfile("/initrd.img")
True
>>> os.path.isfile("/doesnotexist")
False

Now the above is probably the best pragmatic direct answer here, but there's the possibility of a race condition (depending on what you're trying to accomplish), and the fact that the underlying implementation uses a try, but Python uses try everywhere in its implementation.

Because Python uses try everywhere, there's really no reason to avoid an implementation that uses it.

But the rest of this answer attempts to consider these caveats.

Longer, much more pedantic answer

Available since Python 3.4, use the new Path object in pathlib. Note that .exists is not quite right, because directories are not files (except in the unix sense that everything is a file).

>>> from pathlib import Path
>>> root = Path("/")
>>> root.exists()
True

So we need to use is_file:

>>> root.is_file()
False

Here's the help on is_file:

is_file(self)
    Whether this path is a regular file (also True for symlinks pointing
    to regular files).

So let's get a file that we know is a file:

>>> import tempfile
>>> file = tempfile.NamedTemporaryFile()
>>> filepathobj = Path(file.name)
>>> filepathobj.is_file()
True
>>> filepathobj.exists()
True

By default, NamedTemporaryFile deletes the file when closed (and will automatically close when no more references exist to it).

>>> del file
>>> filepathobj.exists()
False
>>> filepathobj.is_file()
False

If you dig into the implementation, though, you'll see that is_file uses try:

def is_file(self):
    """
    Whether this path is a regular file (also True for symlinks pointing
    to regular files).
    """
    try:
        return S_ISREG(self.stat().st_mode)
    except OSError as e:
        if e.errno not in (ENOENT, ENOTDIR):
            raise
        # Path doesn't exist or is a broken symlink
        # (see https://bitbucket.org/pitrou/pathlib/issue/12/)
        return False

Race Conditions: Why we like try

We like try because it avoids race conditions. With try, you simply attempt to read your file, expecting it to be there, and if not, you catch the exception and perform whatever fallback behavior makes sense.

If you want to check that a file exists before you attempt to read it, and you might be deleting it and then you might be using multiple threads or processes, or another program knows about that file and could delete it - you risk the chance of a race condition if you check it exists, because you are then racing to open it before its condition (its existence) changes.

Race conditions are very hard to debug because there's a very small window in which they can cause your program to fail.
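
A minimal EAFP sketch of that idea (the file name is hypothetical): attempt the read and handle absence at the moment it actually matters, instead of checking first and then racing another process to the open().

# EAFP: try the operation and handle the failure,
# rather than checking existence beforehand.
try:
    with open("maybe_missing.txt") as f:   # hypothetical file name
        data = f.read()
except FileNotFoundError:
    data = ""                              # sensible fallback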

But if this is your motivation, you can get the value of a try statement by using the suppress context manager.

Avoiding race conditions without a try statement: suppress

Python 3.4 gives us the suppress context manager (previously the ignore context manager), which does semantically exactly the same thing in fewer lines, while also (at least superficially) meeting the original ask to avoid a try statement:

from contextlib import suppress
from pathlib import Path

Usage:

>>> with suppress(OSError), Path("doesnotexist").open() as f:
...     for line in f:
...         print(line)
... 
>>>
>>> with suppress(OSError):
...     Path("doesnotexist").unlink()
... 
>>> 

For earlier Pythons, you could roll your own suppress, but doing so without a try will be more verbose than with one. I do believe this actually is the only answer that doesn't use try at any level and that can be applied to Python versions prior to 3.4, because it uses a context manager instead:

class suppress(object):
    def __init__(self, *exceptions):
        self.exceptions = exceptions
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None:
            return issubclass(exc_type, self.exceptions)

Perhaps easier with a try:

from contextlib import contextmanager

@contextmanager
def suppress(*exceptions):
    try:
        yield
    except exceptions:
        pass

Other options that don't meet the ask for "without try":

isfile

import os
os.path.isfile(path)

from the docs:

os.path.isfile(path)

Return True if path is an existing regular file. This follows symbolic links, so both islink() and isfile() can be true for the same path.

But if you examine the source of this function, you'll see it actually does use a try statement:

# This follows symbolic links, so both islink() and isdir() can be true
# for the same path on systems that support symlinks
def isfile(path):
    """Test whether a path is a regular file"""
    try:
        st = os.stat(path)
    except os.error:
        return False
    return stat.S_ISREG(st.st_mode)
>>> OSError is os.error
True

All it's doing is using the given path to see if it can get stats on it, catching OSError and then checking if it's a file if it didn't raise the exception.

If you intend to do something with the file, I would suggest directly attempting it with a try-except to avoid a race condition:

try:
    with open(path) as f:
        f.read()
except OSError:
    pass

os.access

Available for Unix and Windows is os.access, but to use it you must pass flags, and it does not differentiate between files and directories. This is more used to test if the real invoking user has access in an elevated privilege environment:

import os
os.access(path, os.F_OK)

It also suffers from the same race condition problems as isfile. From the docs:

Note: Using access() to check if a user is authorized to e.g. open a file before actually doing so using open() creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. It’s preferable to use EAFP techniques. For example:

if os.access("myfile", os.R_OK):
    with open("myfile") as fp:
        return fp.read()
return "some default data"

is better written as:

try:
    fp = open("myfile")
except IOError as e:
    if e.errno == errno.EACCES:
        return "some default data"
    # Not a permission error.
    raise
else:
    with fp:
        return fp.read()

Avoid using os.access. It is a low level function that has more opportunities for user error than the higher level objects and functions discussed above.

Criticism of another answer:

Another answer says this about os.access:

Personally, I prefer this one because under the hood, it calls native APIs (via "${PYTHON_SRC_DIR}/Modules/posixmodule.c"), but it also opens a gate for possible user errors, and it's not as Pythonic as other variants:

This answer says it prefers a non-Pythonic, error-prone method, with no justification. It seems to encourage users to use low-level APIs without understanding them.

It also creates a context manager which, by unconditionally returning True, allows all Exceptions (including KeyboardInterrupt and SystemExit!) to pass silently, which is a good way to hide bugs.

This seems to encourage users to adopt poor practices.

Answer #8

The short answer is that requirements.txt is for listing package requirements only. setup.py on the other hand is more like an installation script. If you don't plan on installing the python code, typically you would only need requirements.txt.

The file setup.py describes, in addition to the package dependencies, the set of files and modules that should be packaged (or compiled, in the case of native modules (i.e., written in C)), and metadata to add to the python package listings (e.g. package name, package version, package description, author, ...).

Because both files list dependencies, this can lead to a bit of duplication. Read below for details.

requirements.txt


This file lists python package requirements. It is a plain text file (optionally with comments) that lists the package dependencies of your python project (one per line). It does not describe the way in which your python package is installed. You would generally consume the requirements file with pip install -r requirements.txt.

The filename of the text file is arbitrary, but is often requirements.txt by convention. When exploring source code repositories of other python packages, you might stumble on other names, such as dev-dependencies.txt or dependencies-dev.txt. Those serve the same purpose as dependencies.txt but generally list additional dependencies of interest to developers of the particular package, namely for testing the source code (e.g. pytest, pylint, etc.) before release. Users of the package generally wouldn't need the entire set of developer dependencies to run the package.

If multiple requirements-X.txt variants are present, then usually one will list runtime dependencies, and the other build-time, or test dependencies. Some projects also cascade their requirements file, i.e. when one requirements file includes another file (example). Doing so can reduce repetition.

setup.py


This is a python script which uses the setuptools module to define a python package (name, files included, package metadata, and installation). It will, like requirements.txt, also list runtime dependencies of the package. Setuptools is the de-facto way to build and install python packages, but it has its shortcomings, which over time have sprouted the development of new "meta-package managers", like pip. Example shortcomings of setuptools are its inability to install multiple versions of the same package, and lack of an uninstall command.

When a python user does pip install ./pkgdir_my_module (or pip install my-module), pip will run setup.py in the given directory (or module). Similarly, any module which has a setup.py can be pip-installed, e.g. by running pip install . from the same folder.

Do I really need both?


The short answer is no, but it's nice to have both. They achieve different purposes, but they can both be used to list your dependencies.

There is one trick you may consider to avoid duplicating your list of dependencies between requirements.txt and setup.py. If you have written a fully working setup.py for your package already, and your dependencies are mostly external, you could consider having a simple requirements.txt with only the following:

 # requirements.txt
 #
 # installs dependencies from ./setup.py, and the package itself,
 # in editable mode
 -e .

 # (the -e above is optional). you could also just install the package
 # normally with just the line below (after uncommenting)
 # .

The -e is a special pip install option which installs the given package in editable mode. When pip install -r requirements.txt is run on this file, pip will install your dependencies via the list in ./setup.py. The editable option will place a symlink in your install directory (instead of an egg or archived copy). It allows developers to edit code in place from the repository without reinstalling.

You can also take advantage of what's called "setuptools extras" when you have both files in your package repository. You can define optional packages in setup.py under a custom category, and install those packages from just that category with pip:

# setup.py
from setuptools import setup
setup(
   name="FOO"
   ...
   extras_require = {
       "dev": ["pylint"],
       "build": ["requests"]
   }
   ...
)

and then, in the requirements file:

# install packages in the [build] category, from setup.py
# (path/to/mypkg is the directory where setup.py is)
-e path/to/mypkg[build]

This would keep all your dependency lists inside setup.py.

Note: You would normally execute pip and setup.py from a sandbox, such as those created with the program virtualenv. This will avoid installing python packages outside the context of your project's development environment.

Answer #9

os.path.realpath dereferences symbolic links on those operating systems which support them.

os.path.abspath simply removes things like . and .. from the path giving a full path from the root of the directory tree to the named file (or symlink)

For example, on Ubuntu

$ ls -l
total 0
-rw-rw-r-- 1 guest guest 0 Jun 16 08:36 a
lrwxrwxrwx 1 guest guest 1 Jun 16 08:36 b -> a

$ python
Python 2.7.11 (default, Dec 15 2015, 16:46:19) 
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> from os.path import abspath, realpath

>>> abspath("b")
"/home/guest/play/paths/b"

>>> realpath("b")
"/home/guest/play/paths/a"

Symlinks can contain relative paths, hence the need to use both. The inner call to realpath might return a path with embedded .. parts, which abspath then removes.

Answer #10

As of Django 1.8, which is the current release, there is no need to symlink, copy the admin/templates to your project folder, or install middleware as suggested by the answers above. Here is what to do:

  1. Create the following tree structure (recommended by the official documentation):

    your_project
         |-- your_project/
         |-- myapp/
         |-- templates/
              |-- admin/
                  |-- myapp/
                      |-- change_form.html  <- do not misspell this
    

Note: The location of this file is not important; you can put it inside your app and it will still work, as long as its location can be discovered by Django. What's more important is that the name of the HTML file has to be the same as the original HTML file name provided by Django.

  2. Add this template path to your settings.py:

    TEMPLATES = [
        {
            "BACKEND": "django.template.backends.django.DjangoTemplates",
            "DIRS": [os.path.join(BASE_DIR, "templates")], # <- add this line
            "APP_DIRS": True,
            "OPTIONS": {
                "context_processors": [
                    "django.template.context_processors.debug",
                    "django.template.context_processors.request",
                    "django.contrib.auth.context_processors.auth",
                    "django.contrib.messages.context_processors.messages",
                ],
            },
        },
    ]
    
  3. Identify the name and block you want to override. This is done by looking into Django's admin/templates directory. I am using virtualenv, so for me, the path is here:

    ~/.virtualenvs/edge/lib/python2.7/site-packages/django/contrib/admin/templates/admin
    

In this example, I want to modify the add new user form. The template responsible for this view is change_form.html. Open up change_form.html and find the {% block %} that you want to extend.

  4. In your change_form.html, write something like this:

    {% extends "admin/change_form.html" %}
    {% block field_sets %}
         {# your modification here #}
    {% endblock %}
    
  5. Load up your page and you should see the changes.
