numpy.ones() in Python


numpy.ones(shape, dtype=None, order='C'): Return a new array of the given shape and type, filled with ones.
Parameters :

  shape: int or sequence of ints - shape of the new array.
  dtype: [optional, float by default] - data type of the returned array.
  order: 'C' or 'F' - C-contiguous order in memory (last index varies the fastest) means row-wise operations on the array will be slightly quicker; Fortran-contiguous order (first index varies the fastest) means column-wise operations will be faster.

Returns :

 ndarray of ones with the given shape, order and data type.

# Python program illustrating
# the numpy.ones method

import numpy as geek

b = geek.ones(2, dtype=int)
print("Matrix b:", b)

a = geek.ones([2, 2], dtype=int)
print("Matrix a:", a)

c = geek.ones([3, 3])
print("Matrix c:", c)

Output:

Matrix b: [1 1]
Matrix a: [[1 1]
 [1 1]]
Matrix c: [[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]

Link:
https://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.ones.html
Note: unlike numpy.zeros and numpy.empty, the values here are set to one rather than to zero or to arbitrary (uninitialized) values, respectively. Also, these snippets may not run in an online IDE; please run them on your own system to see how they work.
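For comparison, here is a minimal sketch contrasting ones with its siblings zeros and empty (run it locally, as noted above):

import numpy as np

print(np.ones((2, 2)))    # every element is 1.0
print(np.zeros((2, 2)))   # every element is 0.0
print(np.empty((2, 2)))   # uninitialized memory; the contents are arbitrary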

This article is courtesy of Mohit Gupta_OMG



numpy.ones() in Python: StackOverflow Questions

Is there a list of Pytz Timezones?

I would like to know all the possible values for the timezone argument in the Python library pytz. How can I list them?
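A minimal sketch of listing the zone names that ship with pytz (these attributes are part of the pytz package):

import pytz

print(len(pytz.all_timezones))        # every zone name pytz knows about
print(pytz.common_timezones[:5])      # a shorter, commonly used subset
print(pytz.country_timezones["AU"])   # zones for a given ISO country code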

Python strptime() and timezones?

I have a CSV dumpfile from a Blackberry IPD backup, created using IPDDump. The date/time strings in here look something like this (where EST is an Australian time-zone):

Tue Jun 22 07:46:22 EST 2010

I need to be able to parse this date in Python. At first, I tried to use the strptime() function from datetime.

>>> datetime.datetime.strptime("Tue Jun 22 12:10:20 2010 EST", "%a %b %d %H:%M:%S %Y %Z")

However, for some reason, the datetime object that comes back doesn't seem to have any tzinfo associated with it.

I did read on this page that apparently datetime.strptime silently discards tzinfo, however, I checked the documentation, and I can't find anything to that effect documented here.

I have been able to get the date parsed using a third-party Python library, dateutil, however I'm still curious as to how I was using the in-built strptime() incorrectly? Is there any way to get strptime() to play nicely with timezones?
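A hedged sketch of the two approaches: strptime's %Z only reliably matches a few names (UTC, GMT and the local zone), while dateutil lets you map an ambiguous abbreviation such as EST to an explicit offset via its tzinfos argument (the +10:00 offset below is an assumption matching the Australian EST from the question):

from dateutil import parser, tz

# map the ambiguous "EST" to Australian Eastern Standard Time (UTC+10)
tzmap = {"EST": tz.tzoffset("EST", 10 * 3600)}

dt = parser.parse("Tue Jun 22 07:46:22 EST 2010", tzinfos=tzmap)
print(dt)          # 2010-06-22 07:46:22+10:00
print(dt.tzinfo)   # tzoffset('EST', 36000)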

Fitting empirical distribution to theoretical ones with Scipy (Python)?

INTRODUCTION: I have a list of more than 30,000 integer values ranging from 0 to 47, inclusive, e.g. [0,0,0,0,..,1,1,1,1,...,2,2,2,2,...,47,47,47,...] sampled from some continuous distribution. The values in the list are not necessarily in order, but order doesn't matter for this problem.

PROBLEM: Based on my distribution I would like to calculate the p-value (the probability of seeing greater values) for any given value. For example, the p-value for 0 would approach 1 and the p-value for higher numbers would tend to 0.

I don"t know if I am right, but to determine probabilities I think I need to fit my data to a theoretical distribution that is the most suitable to describe my data. I assume that some kind of goodness of fit test is needed to determine the best model.

Is there a way to implement such an analysis in Python (Scipy or Numpy)? Could you present any examples?

Thank you!
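As a minimal sketch of one possible workflow (the gamma distribution here is an illustrative assumption; in practice you would fit several scipy.stats distributions and compare their goodness of fit), the survival function sf gives the probability of seeing a greater value:

import numpy as np
from scipy import stats

# stand-in for the ~30,000 observations described in the question
data = np.random.default_rng(0).gamma(shape=2.0, scale=5.0, size=30000)

params = stats.gamma.fit(data)                  # maximum-likelihood fit of the chosen distribution
d, p_fit = stats.kstest(data, "gamma", params)  # rough goodness-of-fit check

p_value = stats.gamma.sf(10, *params)           # P(X > 10) under the fitted model
print(params, d, p_value)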

Python"s many ways of string formatting — are the older ones (going to be) deprecated?

Python has at least six ways of formatting a string:

In [1]: world = "Earth"

# method 1a
In [2]: "Hello, %s" % world
Out[2]: "Hello, Earth"

# method 1b
In [3]: "Hello, %(planet)s" % {"planet": world}
Out[3]: "Hello, Earth"

# method 2a
In [4]: "Hello, {0}".format(world)
Out[4]: "Hello, Earth"

# method 2b
In [5]: "Hello, {planet}".format(planet=world)
Out[5]: "Hello, Earth"

# method 2c
In [6]: f"Hello, {world}"
Out[6]: "Hello, Earth"

In [7]: from string import Template

# method 3
In [8]: Template("Hello, $planet").substitute(planet=world)
Out[8]: "Hello, Earth"

A brief history of the different methods:

  • printf-style formatting has been around since Python's infancy
  • The Template class was introduced in Python 2.4
  • The format method was introduced in Python 2.6
  • f-strings were introduced in Python 3.6

My questions are:

  • Is printf-style formatting deprecated or going to be deprecated?
  • In the Template class, is the substitute method deprecated or going to be deprecated? (I'm not talking about safe_substitute, which as I understand it offers unique capabilities)

Similar questions and why I think they're not duplicates:

  • Python string formatting: % vs. .format - treats only methods 1 and 2, and asks which one is better; my question is explicitly about deprecation in the light of the Zen of Python

  • String formatting options: pros and cons - treats only methods 1a and 1b in the question, 1 and 2 in the answer, and also nothing about deprecation

  • advanced string formatting vs template strings - mostly about methods 1 and 3, and doesn't address deprecation

  • String formatting expressions (Python) - answer mentions that the original "%" approach is planned to be deprecated. But what's the difference between planned to be deprecated, pending deprecation and actual deprecation? And the printf-style method doesn't raise even a PendingDeprecationWarning, so is this really going to be deprecated? This post is also quite old, so the information may be outdated.


Answer #1

How to iterate over rows in a DataFrame in Pandas?

Answer: DON"T*!

Iteration in Pandas is an anti-pattern and is something you should only do when you have exhausted every other option. You should not use any function with "iter" in its name for more than a few thousand rows or you will have to get used to a lot of waiting.

Do you want to print a DataFrame? Use DataFrame.to_string().

Do you want to compute something? In that case, search for methods in this order (list modified from here):

  1. Vectorization
  2. Cython routines
  3. List Comprehensions (vanilla for loop)
  4. DataFrame.apply(): i)  Reductions that can be performed in Cython, ii) Iteration in Python space
  5. DataFrame.itertuples() and iteritems()
  6. DataFrame.iterrows()

iterrows and itertuples (both receiving many votes in answers to this question) should be used in very rare circumstances, such as generating row objects/namedtuples for sequential processing, which is really the only thing these functions are useful for.

Appeal to Authority

The documentation page on iteration has a huge red warning box that says:

Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed [...].

* It"s actually a little more complicated than "don"t". df.iterrows() is the correct answer to this question, but "vectorize your ops" is the better one. I will concede that there are circumstances where iteration cannot be avoided (for example, some operations where the result depends on the value computed for the previous row). However, it takes some familiarity with the library to know when. If you"re not sure whether you need an iterative solution, you probably don"t. PS: To know more about my rationale for writing this answer, skip to the very bottom.


Faster than Looping: Vectorization, Cython

A good number of basic operations and computations are "vectorised" by pandas (either through NumPy, or through Cythonized functions). This includes arithmetic, comparisons, (most) reductions, reshaping (such as pivoting), joins, and groupby operations. Look through the documentation on Essential Basic Functionality to find a suitable vectorised method for your problem.

If none exists, feel free to write your own using custom Cython extensions.


Next Best Thing: List Comprehensions*

List comprehensions should be your next port of call if 1) there is no vectorized solution available, 2) performance is important, but not important enough to go through the hassle of cythonizing your code, and 3) you're trying to perform an elementwise transformation on your data. There is a good amount of evidence to suggest that list comprehensions are sufficiently fast (and even sometimes faster) for many common Pandas tasks.

The formula is simple:

# Iterating over one column - `f` is some function that processes your data
result = [f(x) for x in df["col"]]
# Iterating over two columns, use `zip`
result = [f(x, y) for x, y in zip(df["col1"], df["col2"])]
# Iterating over multiple columns - same data type
result = [f(row[0], ..., row[n]) for row in df[["col1", ...,"coln"]].to_numpy()]
# Iterating over multiple columns - differing data type
result = [f(row[0], ..., row[n]) for row in zip(df["col1"], ..., df["coln"])]

If you can encapsulate your business logic into a function, you can use a list comprehension that calls it. You can make arbitrarily complex things work through the simplicity and speed of raw Python code.

Caveats

List comprehensions assume that your data is easy to work with - what that means is your data types are consistent and you don't have NaNs, but this cannot always be guaranteed.

  1. The first one is more obvious, but when dealing with NaNs, prefer in-built pandas methods if they exist (because they have much better corner-case handling logic), or ensure your business logic includes appropriate NaN handling logic.
  2. When dealing with mixed data types you should iterate over zip(df["A"], df["B"], ...) instead of df[["A", "B"]].to_numpy() as the latter implicitly upcasts data to the most common type. As an example if A is numeric and B is string, to_numpy() will cast the entire array to string, which may not be what you want. Fortunately zipping your columns together is the most straightforward workaround to this.

*Your mileage may vary for the reasons outlined in the Caveats section above.


An Obvious Example

Let"s demonstrate the difference with a simple example of adding two pandas columns A + B. This is a vectorizable operaton, so it will be easy to contrast the performance of the methods discussed above.

Benchmarking code, for your reference. The line at the bottom measures a function written in numpandas, a style of Pandas that mixes heavily with NumPy to squeeze out maximum performance. Writing numpandas code should be avoided unless you know what you"re doing. Stick to the API where you can (i.e., prefer vec over vec_numpy).

I should mention, however, that it isn"t always this cut and dry. Sometimes the answer to "what is the best method for an operation" is "it depends on your data". My advice is to test out different approaches on your data before settling on one.
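The benchmark itself is only referenced above; as a rough, hedged sketch of the same comparison on synthetic data:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 100, size=(10_000, 2)), columns=["A", "B"])

# Vectorized: one call, executed in compiled code
out_vec = df["A"] + df["B"]

# List comprehension: raw Python, but still reasonably fast
out_lc = [a + b for a, b in zip(df["A"], df["B"])]

# iterrows: slowest option, builds a Series object for every row
out_iter = [row["A"] + row["B"] for _, row in df.iterrows()]

assert out_vec.tolist() == out_lc == out_iter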


Further Reading

* Pandas string methods are "vectorized" in the sense that they are specified on the series but operate on each element. The underlying mechanisms are still iterative, because string operations are inherently hard to vectorize.


Why I Wrote this Answer

A common trend I notice from new users is to ask questions of the form "How can I iterate over my df to do X?", showing code that calls iterrows() while doing something inside a for loop. Here is why. A new user to the library who has not been introduced to the concept of vectorization will likely envision the code that solves their problem as iterating over their data to do something. Not knowing how to iterate over a DataFrame, the first thing they do is Google it and end up here, at this question. They then see the accepted answer telling them how to, and they close their eyes and run this code without ever first questioning whether iteration is the right thing to do.

The aim of this answer is to help new users understand that iteration is not necessarily the solution to every problem, and that better, faster and more idiomatic solutions could exist, and that it is worth investing time in exploring them. I'm not trying to start a war of iteration vs. vectorization, but I want new users to be informed when developing solutions to their problems with this library.

Answer #2

Placing the legend (bbox_to_anchor)

A legend is positioned inside the bounding box of the axes using the loc argument to plt.legend.
E.g. loc="upper right" places the legend in the upper right corner of the bounding box, which by default extends from (0,0) to (1,1) in axes coordinates (or in bounding box notation (x0, y0, width, height) = (0, 0, 1, 1)).

To place the legend outside of the axes bounding box, one may specify a tuple (x0,y0) of axes coordinates of the lower left corner of the legend.

plt.legend(loc=(1.04,0))

A more versatile approach is to manually specify the bounding box into which the legend should be placed, using the bbox_to_anchor argument. One can restrict oneself to supplying only the (x0, y0) part of the bbox. This creates a zero-span box, out of which the legend will expand in the direction given by the loc argument. E.g.

plt.legend(bbox_to_anchor=(1.04,1), loc="upper left")

places the legend outside the axes, such that the upper left corner of the legend is at position (1.04,1) in axes coordinates.

Further examples are given below, where additionally the interplay between different arguments like mode and ncol is shown.

[figure: legend placements produced by the calls below]

l1 = plt.legend(bbox_to_anchor=(1.04,1), borderaxespad=0)
l2 = plt.legend(bbox_to_anchor=(1.04,0), loc="lower left", borderaxespad=0)
l3 = plt.legend(bbox_to_anchor=(1.04,0.5), loc="center left", borderaxespad=0)
l4 = plt.legend(bbox_to_anchor=(0,1.02,1,0.2), loc="lower left",
                mode="expand", borderaxespad=0, ncol=3)
l5 = plt.legend(bbox_to_anchor=(1,0), loc="lower right", 
                bbox_transform=fig.transFigure, ncol=3)
l6 = plt.legend(bbox_to_anchor=(0.4,0.8), loc="upper right")

Details about how to interpret the 4-tuple argument to bbox_to_anchor, as in l4, can be found in this question. The mode="expand" expands the legend horizontally inside the bounding box given by the 4-tuple. For a vertically expanded legend, see this question.

Sometimes it may be useful to specify the bounding box in figure coordinates instead of axes coordinates. This is shown in the example l5 from above, where the bbox_transform argument is used to put the legend in the lower left corner of the figure.

Postprocessing

Having placed the legend outside the axes often leads to the undesired situation that it is completely or partially outside the figure canvas.

Solutions to this problem are:

  • Adjust the subplot parameters
    One can adjust the subplot parameters such that the axes take up less space inside the figure (and thereby leave more space for the legend) by using plt.subplots_adjust. E.g.

      plt.subplots_adjust(right=0.7)
    

leaves 30% space on the right-hand side of the figure, where one could place the legend.

  • Tight layout
    Using plt.tight_layout allows automatically adjusting the subplot parameters such that the elements in the figure sit tight against the figure edges. Unfortunately, the legend is not taken into account in this automatism, but we can supply a rectangle box that the whole subplots area (including labels) will fit into.

      plt.tight_layout(rect=[0,0,0.75,1])
    
  • Saving the figure with bbox_inches = "tight"
    The argument bbox_inches = "tight" to plt.savefig can be used to save the figure such that all artists on the canvas (including the legend) fit into the saved area. If needed, the figure size is automatically adjusted.

      plt.savefig("output.png", bbox_inches="tight")
    
  • automatically adjusting the subplot params
    A way to automatically adjust the subplot position such that the legend fits inside the canvas without changing the figure size can be found in this answer: Creating figure with exact size and no padding (and legend outside the axes)

Comparison between the cases discussed above:

[figure: comparison of the four approaches above]

Alternatives

A figure legend

One may add a legend to the figure instead of the axes, via matplotlib.figure.Figure.legend. This has become especially useful for matplotlib versions >= 2.1, where no special arguments are needed

fig.legend(loc=7) 

to create a legend for all artists in the different axes of the figure. The legend is placed using the loc argument, similar to how it is placed inside an axes, but in reference to the whole figure - hence it will be outside the axes somewhat automatically. What remains is to adjust the subplots such that there is no overlap between the legend and the axes. Here the point "Adjust the subplot parameters" from above will be helpful. An example:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0,2*np.pi)
colors=["#7aa0c4","#ca82e1" ,"#8bcd50","#e18882"]
fig, axes = plt.subplots(ncols=2)
for i in range(4):
    axes[i//2].plot(x,np.sin(x+i), color=colors[i],label="y=sin(x+{})".format(i))

fig.legend(loc=7)
fig.tight_layout()
fig.subplots_adjust(right=0.75)   
plt.show()

[figure: resulting plot with the figure legend on the right]

Legend inside dedicated subplot axes

An alternative to using bbox_to_anchor would be to place the legend in its own dedicated subplot axes (lax). Since the legend subplot should be smaller than the plot, we may use gridspec_kw={"width_ratios":[4,1]} at axes creation. We can hide the axes with lax.axis("off") but still put a legend in it. The legend handles and labels need to be obtained from the real plot via h,l = ax.get_legend_handles_labels(), and can then be supplied to the legend in the lax subplot, lax.legend(h,l). A complete example is below.

import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = 6,2

fig, (ax,lax) = plt.subplots(ncols=2, gridspec_kw={"width_ratios":[4,1]})
ax.plot(x,y, label="y=sin(x)")
....

h,l = ax.get_legend_handles_labels()
lax.legend(h,l, borderaxespad=0)
lax.axis("off")

plt.tight_layout()
plt.show()

This produces a plot, which is visually pretty similar to the plot from above:

[figure: plot with the legend in a dedicated subplot axes]

We could also use the first axes to place the legend, but use the bbox_transform of the legend axes,

ax.legend(bbox_to_anchor=(0,0,1,1), bbox_transform=lax.transAxes)
lax.axis("off")

In this approach, we do not need to obtain the legend handles externally, but we need to specify the bbox_to_anchor argument.

Further reading and notes:

  • Consider the matplotlib legend guide with some examples of other things you may want to do with legends.
  • Some example code for placing legends for pie charts may directly be found in answer to this question: Python - Legend overlaps with the pie chart
  • The loc argument can take numbers instead of strings, which make calls shorter, however, they are not very intuitively mapped to each other. Here is the mapping for reference:

[figure: mapping between numeric loc codes and location strings]

Answer #3

It helps to install a Python package foo on your machine (it can also be in a virtualenv) so that you can import the package foo from other projects and also from [I]Python prompts.

It does a similar job to pip, easy_install, etc.


Using setup.py

Let"s start with some definitions:

Package - A folder/directory that contains an __init__.py file.
Module - A valid Python file with a .py extension.
Distribution - How one package relates to other packages and modules.

Let"s say you want to install a package named foo. Then you do,

$ git clone https://github.com/user/foo  
$ cd foo
$ python setup.py install

Instead, if you don"t want to actually install it but still would like to use it. Then do,

$ python setup.py develop  

This command will create symlinks to the source directory within site-packages instead of copying things. Because of this, it is quite fast (particularly for large packages).


Creating setup.py

If you have your package tree like,

foo
├── foo
│   ├── data_struct.py
│   ├── __init__.py
│   └── internals.py
├── README
├── requirements.txt
└── setup.py

Then, you do the following in your setup.py script so that it can be installed on some machine:

from setuptools import setup

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   author="Man Foo",
   author_email="[email protected]",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
)

Instead, if your package tree is more complex like the one below:

foo
├── foo
│   ├── data_struct.py
│   ├── __init__.py
│   └── internals.py
├── README
├── requirements.txt
├── scripts
│   ├── cool
│   └── skype
└── setup.py

Then, your setup.py in this case would be like:

from setuptools import setup

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   author="Man Foo",
   author_email="[email protected]",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
   scripts=[
            "scripts/cool",
            "scripts/skype",
           ]
)

Add more stuff to setup.py to make it more complete:

from setuptools import setup

with open("README", "r") as f:
    long_description = f.read()

setup(
   name="foo",
   version="1.0",
   description="A useful module",
   license="MIT",
   long_description=long_description,
   author="Man Foo",
   author_email="[email protected]",
   url="http://www.foopackage.com/",
   packages=["foo"],  #same as name
   install_requires=["wheel", "bar", "greek"], #external packages as dependencies
   scripts=[
            "scripts/cool",
            "scripts/skype",
           ]
)

The long_description is used in pypi.org as the README description of your package.


And finally, you"re now ready to upload your package to PyPi.org so that others can install your package using pip install yourpackage.

At this point there are two options.

  • publish on the temporary test.pypi.org server to familiarize yourself with the procedure, and then publish it on the permanent pypi.org server for the public to use your package.
  • publish straight away on the permanent pypi.org server, if you are already familiar with the procedure and have your user credentials (e.g., username, password, package name)

Once your package name is registered on pypi.org, nobody else can claim or use it. Python packaging suggests the twine package for uploading purposes (of your package to PyPi). Thus,

(1) the first step is to locally build the distributions using:

# prereq: wheel (pip install wheel)  
$ python setup.py sdist bdist_wheel   

(2) then using twine for uploading either to test.pypi.org or pypi.org:

$ twine upload --repository testpypi dist/*  
username: ***  
password: ***  

It will take a few minutes for the package to appear on test.pypi.org. Once you're satisfied with it, you can then upload your package to the real & permanent index of pypi.org simply with:

$ twine upload dist/*  

Optionally, you can also sign the files in your package with GPG by:

$ twine upload dist/* --sign 


Answer #4

tl;dr / quick fix

  • Don"t decode/encode willy nilly
  • Don"t assume your strings are UTF-8 encoded
  • Try to convert strings to Unicode strings as soon as possible in your code
  • Fix your locale: How to solve UnicodeDecodeError in Python 3.6?
  • Don"t be tempted to use quick reload hacks

Unicode Zen in Python 2.x - The Long Version

Without seeing the source it's difficult to know the root cause, so I'll have to speak generally.

UnicodeDecodeError: 'ascii' codec can't decode byte generally happens when you try to convert a Python 2.x str that contains non-ASCII to a Unicode string without specifying the encoding of the original string.

In brief, Unicode strings are an entirely separate type of Python string that does not contain any encoding. They only hold Unicode code points and therefore can hold any Unicode point from across the entire spectrum. Strings contain encoded text, be it UTF-8, UTF-16, ISO-8859-1, GBK, Big5 etc. Strings are decoded to Unicode and Unicodes are encoded to strings. Files and text data are always transferred in encoded strings.

The Markdown module authors probably use unicode() (where the exception is thrown) as a quality gate to the rest of the code - it will convert ASCII or re-wrap existing Unicode strings to a new Unicode string. The Markdown authors can't know the encoding of the incoming string so will rely on you to decode strings to Unicode strings before passing to Markdown.

Unicode strings can be declared in your code using the u prefix to strings. E.g.

>>> my_u = u"my ünicôdé strįng"
>>> type(my_u)
<type "unicode">

Unicode strings may also come from files, databases and network modules. When this happens, you don't need to worry about the encoding.

Gotchas

Conversion from str to Unicode can happen even when you don't explicitly call unicode().

The following scenarios cause UnicodeDecodeError exceptions:

# Explicit conversion without encoding
unicode("€")

# New style format string into Unicode string
# Python will try to convert value string to Unicode first
u"The currency is: {}".format("€")

# Old style format string into Unicode string
# Python will try to convert value string to Unicode first
u"The currency is: %s" % "€"

# Append string to Unicode
# Python will try to convert string to Unicode first
u"The currency is: " + "€"         

Examples

In the following diagram, you can see how the word café has been encoded in either "UTF-8" or "Cp1252" encoding depending on the terminal type. In both examples, caf is just regular ASCII. In UTF-8, é is encoded using two bytes. In "Cp1252", é is 0xE9 (which also happens to be the Unicode point value; it's no coincidence). The correct decode() is invoked and conversion to a Python Unicode is successful:

[Diagram of a string being converted to a Python Unicode string]

In this diagram, decode() is called with ascii (which is the same as calling unicode() without an encoding given). As ASCII can't contain bytes greater than 0x7F, this will throw a UnicodeDecodeError exception:

[Diagram of a string being converted to a Python Unicode string with the wrong encoding]

The Unicode Sandwich

It"s good practice to form a Unicode sandwich in your code, where you decode all incoming data to Unicode strings, work with Unicodes, then encode to strs on the way out. This saves you from worrying about the encoding of strings in the middle of your code.

Input / Decode

Source code

If you need to bake non-ASCII into your source code, just create Unicode strings by prefixing the string with a u. E.g.

u"Zürich"

To allow Python to decode your source code, you will need to add an encoding header to match the actual encoding of your file. For example, if your file was encoded as "UTF-8", you would use:

# encoding: utf-8

This is only necessary when you have non-ASCII in your source code.

Files

Usually non-ASCII data is received from a file. The io module provides a TextIOWrapper that decodes your file on the fly, using a given encoding. You must use the correct encoding for the file - it can't be easily guessed. For example, for a UTF-8 file:

import io
with io.open("my_utf8_file.txt", "r", encoding="utf-8") as my_file:
     my_unicode_string = my_file.read() 

my_unicode_string would then be suitable for passing to Markdown. If you get a UnicodeDecodeError from the read() line, then you've probably used the wrong encoding value.

CSV Files

The Python 2.7 CSV module does not support non-ASCII characters. Help is at hand, however, with https://pypi.python.org/pypi/backports.csv.

Use it like above but pass the opened file to it:

from backports import csv
import io
with io.open("my_utf8_file.txt", "r", encoding="utf-8") as my_file:
    for row in csv.reader(my_file):
        yield row

Databases

Most Python database drivers can return data in Unicode, but usually require a little configuration. Always use Unicode strings for SQL queries.

MySQL

In the connection string add:

charset="utf8",
use_unicode=True

E.g.

>>> db = MySQLdb.connect(host="localhost", user="root", passwd="passwd", db="sandbox", use_unicode=True, charset="utf8")
PostgreSQL

Add:

psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)

HTTP

Web pages can be encoded in just about any encoding. The Content-type header should contain a charset field to hint at the encoding. The content can then be decoded manually against this value. Alternatively, Python-Requests returns Unicodes in response.text.
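For example, with the Requests library (a small sketch; the URL is a placeholder):

import requests

resp = requests.get("https://example.com")   # placeholder URL
print(resp.encoding)    # charset taken from the Content-Type header (or guessed)
text = resp.text        # already decoded to a Unicode string using resp.encoding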

Manually

If you must decode strings manually, you can simply do my_string.decode(encoding), where encoding is the appropriate encoding. Python 2.x supported codecs are given here: Standard Encodings. Again, if you get UnicodeDecodeError then you've probably got the wrong encoding.

The meat of the sandwich

Work with Unicodes as you would normal strs.

Output

stdout / printing

print writes through the stdout stream. Python tries to configure an encoder on stdout so that Unicodes are encoded to the console's encoding. For example, if a Linux shell's locale is en_GB.UTF-8, the output will be encoded to UTF-8. On Windows, you will be limited to an 8-bit code page.

An incorrectly configured console, such as a corrupt locale, can lead to unexpected print errors. The PYTHONIOENCODING environment variable can force the encoding for stdout.

Files

Just like input, io.open can be used to transparently convert Unicodes to encoded byte strings.

Database

The same configuration for reading will allow Unicodes to be written directly.

Python 3

Python 3 is no more Unicode capable than Python 2.x is, however it is slightly less confused on the topic. E.g. the regular str is now a Unicode string and the old str is now bytes.

The default encoding is UTF-8, so if you .decode() a byte string without giving an encoding, Python 3 uses UTF-8 encoding. This probably fixes 50% of people's Unicode problems.
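A quick illustration of that default (the second call shows what happens when the bytes are not actually UTF-8):

>>> b"caf\xc3\xa9".decode()   # no encoding given, UTF-8 is assumed
'café'
>>> b"caf\xe9".decode()       # Latin-1/Cp1252 bytes still fail under UTF-8
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 3: unexpected end of data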

Further, open() operates in text mode by default, so returns decoded str (Unicode ones). The encoding is derived from your locale, which tends to be UTF-8 on Un*x systems or an 8-bit code page, such as windows-1251, on Windows boxes.

Why you shouldn"t use sys.setdefaultencoding("utf8")

It"s a nasty hack (there"s a reason you have to use reload) that will only mask problems and hinder your migration to Python 3.x. Understand the problem, fix the root cause and enjoy Unicode zen. See Why should we NOT use sys.setdefaultencoding("utf-8") in a py script? for further details

Answer #5

You can either drop the columns you do not need or select the ones you want to keep.

# Using DataFrame.drop
df.drop(df.columns[[1, 2]], axis=1, inplace=True)

# drop by Name
df1 = df1.drop(["B", "C"], axis=1)

# Select the ones you want
df1 = df[["a","d"]]

Answer #6

Simply put, numpy.newaxis is used to increase the dimension of the existing array by one more dimension, when used once. Thus,

  • 1D array will become 2D array

  • 2D array will become 3D array

  • 3D array will become 4D array

  • 4D array will become 5D array

and so on..

Here is a visual illustration which depicts promotion of 1D array to 2D arrays.

[figure: visual illustration of promoting a 1D array to row and column vectors]


Scenario-1: np.newaxis might come in handy when you want to explicitly convert a 1D array to either a row vector or a column vector, as depicted in the above picture.

Example:

# 1D array
In [7]: arr = np.arange(4)
In [8]: arr.shape
Out[8]: (4,)

# make it as row vector by inserting an axis along first dimension
In [9]: row_vec = arr[np.newaxis, :]     # arr[None, :]
In [10]: row_vec.shape
Out[10]: (1, 4)

# make it as column vector by inserting an axis along second dimension
In [11]: col_vec = arr[:, np.newaxis]     # arr[:, None]
In [12]: col_vec.shape
Out[12]: (4, 1)

Scenario-2: When we want to make use of numpy broadcasting as part of some operation, for instance while doing addition of some arrays.

Example:

Let"s say you want to add the following two arrays:

 x1 = np.array([1, 2, 3, 4, 5])
 x2 = np.array([5, 4, 3])

If you try to add these just like that, NumPy will raise the following ValueError :

ValueError: operands could not be broadcast together with shapes (5,) (3,)

In this situation, you can use np.newaxis to increase the dimension of one of the arrays so that NumPy can broadcast.

In [2]: x1_new = x1[:, np.newaxis]    # x1[:, None]
# now, the shape of x1_new is (5, 1)
# array([[1],
#        [2],
#        [3],
#        [4],
#        [5]])

Now, add:

In [3]: x1_new + x2
Out[3]:
array([[ 6,  5,  4],
       [ 7,  6,  5],
       [ 8,  7,  6],
       [ 9,  8,  7],
       [10,  9,  8]])

Alternatively, you can also add new axis to the array x2:

In [6]: x2_new = x2[:, np.newaxis]    # x2[:, None]
In [7]: x2_new     # shape is (3, 1)
Out[7]: 
array([[5],
       [4],
       [3]])

Now, add:

In [8]: x1 + x2_new
Out[8]: 
array([[ 6,  7,  8,  9, 10],
       [ 5,  6,  7,  8,  9],
       [ 4,  5,  6,  7,  8]])

Note: Observe that we get the same result in both cases (but one being the transpose of the other).


Scenario-3: This is similar to scenario-1. But, you can use np.newaxis more than once to promote the array to higher dimensions. Such an operation is sometimes needed for higher order arrays (i.e. Tensors).

Example:

In [124]: arr = np.arange(5*5).reshape(5,5)

In [125]: arr.shape
Out[125]: (5, 5)

# promoting 2D array to a 5D array
In [126]: arr_5D = arr[np.newaxis, ..., np.newaxis, np.newaxis]    # arr[None, ..., None, None]

In [127]: arr_5D.shape
Out[127]: (1, 5, 5, 1, 1)

As an alternative, you can use numpy.expand_dims that has an intuitive axis kwarg.

# adding new axes at 1st, 4th, and last dimension of the resulting array
In [131]: newaxes = (0, 3, -1)
In [132]: arr_5D = np.expand_dims(arr, axis=newaxes)
In [133]: arr_5D.shape
Out[133]: (1, 5, 5, 1, 1)

More background on np.newaxis vs np.reshape

newaxis is also called a pseudo-index that allows the temporary addition of an axis into a multiarray.

np.newaxis uses the slicing operator to recreate the array, while numpy.reshape reshapes the array to the desired layout (assuming that the dimensions match, which is a must for a reshape to happen).

Example

In [13]: A = np.ones((3,4,5,6))
In [14]: B = np.ones((4,6))
In [15]: (A + B[:, np.newaxis, :]).shape     # B[:, None, :]
Out[15]: (3, 4, 5, 6)

In the above example, we inserted a temporary axis between the first and second axes of B (to use broadcasting). A missing axis is filled-in here using np.newaxis to make the broadcasting operation work.


General Tip: You can also use None in place of np.newaxis; These are in fact the same objects.

In [13]: np.newaxis is None
Out[13]: True

P.S. Also see this great answer: newaxis vs reshape to add dimensions

Answer #7

How do I determine the size of an object in Python?

The answer, "Just use sys.getsizeof", is not a complete answer.

That answer does work for builtin objects directly, but it does not account for what those objects may contain, specifically, what types such as custom objects, tuples, lists, dicts, and sets contain. They can contain instances of each other, as well as numbers, strings and other objects.

A More Complete Answer

Using 64-bit Python 3.6 from the Anaconda distribution, with sys.getsizeof, I have determined the minimum size of the following objects, and note that sets and dicts preallocate space so empty ones don't grow again until after a set amount (which may vary by implementation of the language):

Python 3:

Empty
Bytes  type        scaling notes
28     int         +4 bytes about every 30 powers of 2
37     bytes       +1 byte per additional byte
49     str         +1-4 per additional character (depending on max width)
48     tuple       +8 per additional item
64     list        +8 for each additional
224    set         5th increases to 736; 21st, 2272; 85th, 8416; 341st, 32992
240    dict        6th increases to 368; 22nd, 1184; 43rd, 2280; 86th, 4704; 171st, 9320
136    func def    does not include default args and other attrs
1056   class def   no slots 
56     class inst  has a __dict__ attr, same scaling as dict above
888    class def   with slots
16     __slots__   seems to store in mutable tuple-like structure
                   first slot grows to 48, and so on.

How do you interpret this? Well, say you have a set with 10 items in it. If each item is 100 bytes each, how big is the whole data structure? The set itself is 736 because it has sized up one time to 736 bytes. Then you add the size of the items, so that's 1736 bytes in total.

Some caveats for function and class definitions:

Note each class definition has a proxy __dict__ (48 bytes) structure for class attrs. Each slot has a descriptor (like a property) in the class definition.

Slotted instances start out with 48 bytes on their first element, and increase by 8 each additional. Only empty slotted objects have 16 bytes, and an instance with no data makes very little sense.

Also, each function definition has code objects, maybe docstrings, and other possible attributes, even a __dict__.

Also note that we use sys.getsizeof() because we care about the marginal space usage, which includes the garbage collection overhead for the object, from the docs:

getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.

Also note that resizing lists (e.g. repetitively appending to them) causes them to preallocate space, similarly to sets and dicts. From the listobj.c source code:

    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
     * Note: new_allocated won't overflow because the largest possible value
     *       is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
     */
    new_allocated = (size_t)newsize + (newsize >> 3) + (newsize < 9 ? 3 : 6);

Historical data

Python 2.7 analysis, confirmed with guppy.hpy and sys.getsizeof:

Bytes  type        empty + scaling notes
24     int         NA
28     long        NA
37     str         + 1 byte per additional character
52     unicode     + 4 bytes per additional character
56     tuple       + 8 bytes per additional item
72     list        + 32 for first, 8 for each additional
232    set         sixth item increases to 744; 22nd, 2280; 86th, 8424
280    dict        sixth item increases to 1048; 22nd, 3352; 86th, 12568 *
120    func def    does not include default args and other attrs
64     class inst  has a __dict__ attr, same scaling as dict above
16     __slots__   class with slots has no dict, seems to store in 
                    mutable tuple-like structure.
904    class def   has a proxy __dict__ structure for class attrs
104    old class   makes sense, less stuff, has real dict though.

Note that dictionaries (but not sets) got a more compact representation in Python 3.6

I think 8 bytes per additional item to reference makes a lot of sense on a 64 bit machine. Those 8 bytes point to the place in memory the contained item is at. The 4 bytes are fixed width for unicode in Python 2, if I recall correctly, but in Python 3, str becomes a unicode of width equal to the max width of the characters.

And for more on slots, see this answer.

A More Complete Function

We want a function that searches the elements in lists, tuples, sets, dicts, obj.__dict__'s, and obj.__slots__, as well as other things we may not have yet thought of.

We want to rely on gc.get_referents to do this search because it works at the C level (making it very fast). The downside is that get_referents can return redundant members, so we need to ensure we don't double count.

Classes, modules, and functions are singletons - they exist one time in memory. We're not so interested in their size, as there's not much we can do about them - they're a part of the program. So we'll avoid counting them if they happen to be referenced.

We're going to use a blacklist of types so we don't include the entire program in our size count.

import sys
from types import ModuleType, FunctionType
from gc import get_referents

# Custom objects know their class.
# Function objects seem to know way too much, including modules.
# Exclude modules as well.
BLACKLIST = type, ModuleType, FunctionType


def getsize(obj):
    """sum size of object & members."""
    if isinstance(obj, BLACKLIST):
        raise TypeError("getsize() does not take argument of type: "+ str(type(obj)))
    seen_ids = set()
    size = 0
    objects = [obj]
    while objects:
        need_referents = []
        for obj in objects:
            if not isinstance(obj, BLACKLIST) and id(obj) not in seen_ids:
                seen_ids.add(id(obj))
                size += sys.getsizeof(obj)
                need_referents.append(obj)
        objects = get_referents(*need_referents)
    return size

To contrast this with the following whitelisted function, most objects know how to traverse themselves for the purposes of garbage collection (which is approximately what we're looking for when we want to know how expensive in memory certain objects are; this functionality is used by gc.get_referents). However, this measure is going to be much more expansive in scope than we intended if we are not careful.

For example, functions know quite a lot about the modules they are created in.

Another point of contrast is that strings that are keys in dictionaries are usually interned so they are not duplicated. Checking for id(key) will also allow us to avoid counting duplicates, which we do in the next section. The blacklist solution skips counting keys that are strings altogether.

Whitelisted Types, Recursive visitor

To cover most of these types myself, instead of relying on the gc module, I wrote this recursive function to try to estimate the size of most Python objects, including most builtins, types in the collections module, and custom types (slotted and otherwise).

This sort of function gives much more fine-grained control over the types we're going to count for memory usage, but has the danger of leaving important types out:

import sys
from numbers import Number
from collections import deque
from collections.abc import Set, Mapping


ZERO_DEPTH_BASES = (str, bytes, Number, range, bytearray)


def getsize(obj_0):
    """Recursively iterate to sum size of object & members."""
    _seen_ids = set()
    def inner(obj):
        obj_id = id(obj)
        if obj_id in _seen_ids:
            return 0
        _seen_ids.add(obj_id)
        size = sys.getsizeof(obj)
        if isinstance(obj, ZERO_DEPTH_BASES):
            pass # bypass remaining control flow and return
        elif isinstance(obj, (tuple, list, Set, deque)):
            size += sum(inner(i) for i in obj)
        elif isinstance(obj, Mapping) or hasattr(obj, "items"):
            size += sum(inner(k) + inner(v) for k, v in getattr(obj, "items")())
        # Check for custom object instances - may subclass above too
        if hasattr(obj, "__dict__"):
            size += inner(vars(obj))
        if hasattr(obj, "__slots__"): # can have __slots__ with __dict__
            size += sum(inner(getattr(obj, s)) for s in obj.__slots__ if hasattr(obj, s))
        return size
    return inner(obj_0)

And I tested it rather casually (I should unittest it):

>>> getsize(["a", tuple("bcd"), Foo()])
344
>>> getsize(Foo())
16
>>> getsize(tuple("bcd"))
194
>>> getsize(["a", tuple("bcd"), Foo(), {"foo": "bar", "baz": "bar"}])
752
>>> getsize({"foo": "bar", "baz": "bar"})
400
>>> getsize({})
280
>>> getsize({"foo":"bar"})
360
>>> getsize("foo")
40
>>> class Bar():
...     def baz():
...         pass
>>> getsize(Bar())
352
>>> getsize(Bar().__dict__)
280
>>> sys.getsizeof(Bar())
72
>>> getsize(Bar.__dict__)
872
>>> sys.getsizeof(Bar.__dict__)
280

This implementation breaks down on class definitions and function definitions because we don't go after all of their attributes, but since they should only exist once in memory for the process, their size really doesn't matter too much.

Answer #8

How do I get the current time in Python?

The time module

The time module provides functions that tell us the time in "seconds since the epoch" as well as other utilities.

import time

Unix Epoch Time

This is the format you should get timestamps in for saving in databases. It is a simple floating-point number that can be converted to an integer. It is also good for arithmetic in seconds, as it represents the number of seconds since Jan 1, 1970, 00:00:00, and it is memory light relative to the other representations of time we"ll be looking at next:

>>> time.time()
1424233311.771502

This timestamp does not account for leap-seconds, so it's not linear - leap seconds are ignored. So while it is not equivalent to the international UTC standard, it is close, and therefore quite good for most cases of record-keeping.

This is not ideal for human scheduling, however. If you have a future event you wish to take place at a certain point in time, you'll want to store that time with a string that can be parsed into a datetime object or a serialized datetime object (these will be described later).
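As a small sketch of going back and forth between an epoch timestamp and a datetime object:

import datetime
import time

ts = time.time()                                  # seconds since the epoch, as a float
dt_local = datetime.datetime.fromtimestamp(ts)    # naive datetime in local time
dt_utc = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)  # aware UTC datetime
print(ts, dt_local.isoformat(), dt_utc.isoformat())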

time.ctime

You can also represent the current time in the way preferred by your operating system (which means it can change when you change your system preferences, so don't rely on this to be standard across all systems, as I've seen others expect). This is typically user friendly, but doesn't typically result in strings one can sort chronologically:

>>> time.ctime()
"Tue Feb 17 23:21:56 2015"

You can hydrate timestamps into human readable form with ctime as well:

>>> time.ctime(1424233311.771502)
"Tue Feb 17 23:21:51 2015"

This conversion is also not good for record-keeping (except in text that will only be parsed by humans - and with improved Optical Character Recognition and Artificial Intelligence, I think the number of these cases will diminish).

datetime module

The datetime module is also quite useful here:

>>> import datetime

datetime.datetime.now

datetime.now is a class method that returns the current time. It uses time.localtime without the timezone info (if not given; otherwise, see timezone aware below). It has a representation (which would allow you to recreate an equivalent object) echoed on the shell, but when printed (or coerced to a str), it is in human readable (and nearly ISO) format, and the lexicographic sort is equivalent to the chronological sort:

>>> datetime.datetime.now()
datetime.datetime(2015, 2, 17, 23, 43, 49, 94252)
>>> print(datetime.datetime.now())
2015-02-17 23:43:51.782461

datetime"s utcnow

You can get a datetime object in UTC time, a global standard, by doing this:

>>> datetime.datetime.utcnow()
datetime.datetime(2015, 2, 18, 4, 53, 28, 394163)
>>> print(datetime.datetime.utcnow())
2015-02-18 04:53:31.783988

UTC is a time standard that is nearly equivalent to the GMT timezone. (While GMT and UTC do not change for Daylight Savings Time, their users may switch to other timezones, like British Summer Time, during the Summer.)

datetime timezone aware

However, none of the datetime objects we've created so far can be easily converted to various timezones. We can solve that problem with the pytz module:

>>> import pytz
>>> then = datetime.datetime.now(pytz.utc)
>>> then
datetime.datetime(2015, 2, 18, 4, 55, 58, 753949, tzinfo=<UTC>)

Equivalently, in Python 3 we have the timezone class with a utc timezone instance attached, which also makes the object timezone aware (but to convert to another timezone without the handy pytz module is left as an exercise to the reader):

>>> datetime.datetime.now(datetime.timezone.utc)
datetime.datetime(2015, 2, 18, 22, 31, 56, 564191, tzinfo=datetime.timezone.utc)

And we see we can easily convert to timezones from the original UTC object.

>>> print(then)
2015-02-18 04:55:58.753949+00:00
>>> print(then.astimezone(pytz.timezone("US/Eastern")))
2015-02-17 23:55:58.753949-05:00

You can also make a naive datetime object aware with the pytz timezone localize method, or by replacing the tzinfo attribute (with replace, this is done blindly), but these are more last resorts than best practices:

>>> pytz.utc.localize(datetime.datetime.utcnow())
datetime.datetime(2015, 2, 18, 6, 6, 29, 32285, tzinfo=<UTC>)
>>> datetime.datetime.utcnow().replace(tzinfo=pytz.utc)
datetime.datetime(2015, 2, 18, 6, 9, 30, 728550, tzinfo=<UTC>)

The pytz module allows us to make our datetime objects timezone aware and convert the times to the hundreds of timezones available in the pytz module.

One could ostensibly serialize this object for UTC time and store that in a database, but it would require far more memory and be more prone to error than simply storing the Unix Epoch time, which I demonstrated first.

The other ways of viewing times are much more error-prone, especially when dealing with data that may come from different time zones. You want there to be no confusion as to which timezone a string or serialized datetime object was intended for.

If you"re displaying the time with Python for the user, ctime works nicely, not in a table (it doesn"t typically sort well), but perhaps in a clock. However, I personally recommend, when dealing with time in Python, either using Unix time, or a timezone aware UTC datetime object.

Answer #9

Python >= 3.5 alternative: unpack into a list literal [*newdict]

New unpacking generalizations (PEP 448) were introduced with Python 3.5 allowing you to now easily do:

>>> newdict = {1:0, 2:0, 3:0}
>>> [*newdict]
[1, 2, 3]

Unpacking with * works with any object that is iterable and, since dictionaries return their keys when iterated through, you can easily create a list by using it within a list literal.

Adding .keys(), i.e. [*newdict.keys()], might help in making your intent a bit more explicit, though it will cost you a function look-up and invocation (which, in all honesty, isn't something you should really be worried about).

The *iterable syntax is similar to doing list(iterable) and its behaviour was initially documented in the Calls section of the Python Reference manual. With PEP 448 the restriction on where *iterable could appear was loosened allowing it to also be placed in list, set and tuple literals, the reference manual on Expression lists was also updated to state this.


Though equivalent to list(newdict) with the difference that it's faster (at least for small dictionaries) because no function call is actually performed:

%timeit [*newdict]
1000000 loops, best of 3: 249 ns per loop

%timeit list(newdict)
1000000 loops, best of 3: 508 ns per loop

%timeit [k for k in newdict]
1000000 loops, best of 3: 574 ns per loop

with larger dictionaries the speed is pretty much the same (the overhead of iterating through a large collection trumps the small cost of a function call).


In a similar fashion, you can create tuples and sets of dictionary keys:

>>> *newdict,
(1, 2, 3)
>>> {*newdict}
{1, 2, 3}

beware of the trailing comma in the tuple case!

Answer #10

You can use

driver.execute_script("window.scrollTo(0, Y)") 

where Y is the height (on a full HD monitor it's 1080). (Thanks to @lukeis)

You can also use

driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

to scroll to the bottom of the page.

If you want to scroll down a page with infinite loading, like social network ones (Facebook, etc.), you can use this loop (thanks to @Cuong Tran):

SCROLL_PAUSE_TIME = 0.5

# Get scroll height
last_height = driver.execute_script("return document.body.scrollHeight")

while True:
    # Scroll down to bottom
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

    # Wait to load page
    time.sleep(SCROLL_PAUSE_TIME)

    # Calculate new scroll height and compare with last scroll height
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

Another method (thanks to Juanse) is to select an object and

label.sendKeys(Keys.PAGE_DOWN);
