numpy.floor() in Python


numpy.floor(x[, out]) = ufunc 'floor': This mathematical function returns the floor of the elements of an array. The floor of the scalar x is the largest integer i such that i <= x.

Parameters:
x: [array_like] Input array

Return: The floor of each element.

Code #1: Working

# Python program explaining
# floor() function

import numpy as np

in_array = [.5, 1.5, 2.5, 3.5, 4.5, 10.1]
print("Input array:", in_array)

flooroff_values = np.floor(in_array)
print("Rounded values:", flooroff_values)

in_array = [.53, 1.54, .71]
print("Input array:", in_array)

flooroff_values = np.floor(in_array)
print("Rounded values:", flooroff_values)

in_array = [.5538, 1.33354, .71445]
print("Input array:", in_array)

flooroff_values = np.floor(in_array)
print("Rounded values:", flooroff_values)

Output:

Input array: [0.5, 1.5, 2.5, 3.5, 4.5, 10.1]
Rounded values: [0. 1. 2. 3. 4. 10.]
Input array: [0.53, 1.54, 0.71]
Rounded values: [0. 1. 0.]
Input array: [0.5538, 1.33354, 0.71445]
Rounded values: [0. 1. 0.]

Code #2: Working

# Python program explaining
# floor() function

import numpy as np

in_array = [1.67, 4.5, 7, 9, 12]
print("Input array:", in_array)

flooroff_values = np.floor(in_array)
print("Rounded values:", flooroff_values)

in_array = [133.000, 344.54, 437.56, 44.9, 1.2]
print("Input array:", in_array)

flooroff_values = np.floor(in_array)
print("Rounded values:", flooroff_values)

Output:

Input array: [1.67, 4.5, 7, 9, 12]
Rounded values: [1. 4. 7. 9. 12.]
Input array: [133.0, 344.54, 437.56, 44.9, 1.2]
Rounded values: [133. 344. 437. 44. 1.]

Links: https://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.floor.html#numpy.floor





numpy.floor() in Python: StackOverflow Questions

Why do Python"s math.ceil() and math.floor() operations return floats instead of integers?

Can someone explain this (straight from the docs; emphasis mine):

math.ceil(x) Return the ceiling of x as a float, the smallest integer value greater than or equal to x.

math.floor(x) Return the floor of x as a float, the largest integer value less than or equal to x.

Why would .ceil and .floor return floats when they are by definition supposed to calculate integers?


EDIT:

Well this got some very good arguments as to why they should return floats, and I was just getting used to the idea, when @jcollado pointed out that they in fact do return ints in Python 3...
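A quick check in a Python 3 interpreter (my own illustration, not part of the original question):

>>> import math
>>> math.floor(2.7), math.ceil(2.3)
(2, 3)
>>> type(math.floor(2.7))
<class 'int'>

In Python 2, the same calls return 2.0 and 3.0 as floats, per the quoted docs.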

Answer #1

Cong Ma does a good job of explaining what __getitem__ is used for - but I want to give you an example which might be useful. Imagine a class which models a building. Within the data for the building it includes a number of attributes, including descriptions of the companies that occupy each floor:

Without using __getitem__ we would have a class like this:

class Building(object):
     def __init__(self, floors):
         self._floors = [None]*floors
     def occupy(self, floor_number, data):
          self._floors[floor_number] = data
     def get_floor_data(self, floor_number):
          return self._floors[floor_number]

building1 = Building(4) # Construct a building with 4 floors
building1.occupy(0, "Reception")
building1.occupy(1, "ABC Corp")
building1.occupy(2, "DEF Inc")
print( building1.get_floor_data(2) )

We could however use __getitem__ (and its counterpart __setitem__) to make the usage of the Building class "nicer".

class Building(object):
     def __init__(self, floors):
         self._floors = [None]*floors
     def __setitem__(self, floor_number, data):
          self._floors[floor_number] = data
     def __getitem__(self, floor_number):
          return self._floors[floor_number]

building1 = Building(4) # Construct a building with 4 floors
building1[0] = "Reception"
building1[1] = "ABC Corp"
building1[2] = "DEF Inc"
print( building1[2] )

Whether you use __setitem__ like this really depends on how you plan to abstract your data - in this case we have decided to treat a building as a container of floors (and you could also implement an iterator for the Building, and maybe even the ability to slice, i.e. get more than one floor's data at a time); it depends on what you need.
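As a rough sketch of those extensions (my own addition, not from the original answer): __getitem__ already handles slices if we simply forward the key to the underlying list, and __iter__ gives us iteration:

class Building(object):
    def __init__(self, floors):
        self._floors = [None] * floors

    def __setitem__(self, floor_number, data):
        self._floors[floor_number] = data

    def __getitem__(self, key):
        # key may be an int or a slice; the list handles both,
        # so building[1:3] returns a list of floor data.
        return self._floors[key]

    def __iter__(self):
        # Iterate over the floors from the ground up.
        return iter(self._floors)

building1 = Building(4)
building1[0] = "Reception"
building1[1] = "ABC Corp"
for data in building1:
    print(data)
print(building1[0:2])  # ['Reception', 'ABC Corp']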

Answer #2

What you have is a float literal without the trailing zero, which you then access the __truediv__ method of. It's not an operator in itself; the first dot is part of the float value, and the second is the dot operator used to access the object's properties and methods.

You can reach the same point by doing the following.

>>> f = 1.
>>> f
1.0
>>> f.__floordiv__
<method-wrapper '__floordiv__' of float object at 0x7f9fb4dc1a20>

Another example

>>> 1..__add__(2.)
3.0

Here we add 1.0 to 2.0, which obviously yields 3.0.

Answer #3

How can I force division to be floating point in Python?

I have two integer values a and b, but I need their ratio in floating point. I know that a < b and I want to calculate a/b, so if I use integer division I'll always get 0 with a remainder of a.

How can I force c to be a floating point number in Python in the following?

c = a / b

What is really being asked here is:

"How do I force true division such that a / b will return a fraction?"

Upgrade to Python 3

In Python 3, to get true division, you simply do a / b.

>>> 1/2
0.5

Floor division, the classic division behavior for integers, is now a // b:

>>> 1//2
0
>>> 1//2.0
0.0

However, you may be stuck using Python 2, or you may be writing code that must work in both 2 and 3.

If Using Python 2

In Python 2, it"s not so simple. Some ways of dealing with classic Python 2 division are better and more robust than others.

Recommendation for Python 2

You can get Python 3 division behavior in any given module with the following import at the top:

from __future__ import division

which then applies Python 3 style division to the entire module. It also works in a Python shell at any given point. In Python 2:

>>> from __future__ import division
>>> 1/2
0.5
>>> 1//2
0
>>> 1//2.0
0.0

This is really the best solution as it ensures the code in your module is more forward compatible with Python 3.

Other Options for Python 2

If you don"t want to apply this to the entire module, you"re limited to a few workarounds. The most popular is to coerce one of the operands to a float. One robust solution is a / (b * 1.0). In a fresh Python shell:

>>> 1/(2 * 1.0)
0.5

Also robust is truediv from the operator module, operator.truediv(a, b), but this is likely slower because it's a function call:

>>> from operator import truediv
>>> truediv(1, 2)
0.5

Not Recommended for Python 2

Commonly seen is a / float(b). This will raise a TypeError if b is a complex number. Since division with complex numbers is defined, it makes sense to me to not have division fail when passed a complex number for the divisor.

>>> 1 / float(2)
0.5
>>> 1 / float(2j)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can"t convert complex to float

It doesn"t make much sense to me to purposefully make your code more brittle.

You can also run Python with the -Qnew flag, but this has the downside of executing all modules with the new Python 3 behavior, and some of your modules may expect classic division, so I don"t recommend this except for testing. But to demonstrate:

$ python -Qnew -c "print 1/2"
0.5
$ python -Qnew -c "print 1/2j"
-0.5j

Answer #4

This seems to be because multiplication of small numbers is optimized in CPython 3.5, in a way that left shifts by small numbers are not. Positive left shifts always create a larger integer object to store the result, as part of the calculation, while for multiplications of the sort you used in your test, a special optimization avoids this and creates an integer object of the correct size. This can be seen in the source code of Python's integer implementation.

Because integers in Python are arbitrary-precision, they are stored as arrays of integer "digits", with a limit on the number of bits per integer digit. So in the general case, operations involving integers are not single operations, but instead need to handle the case of multiple "digits". In pyport.h, this bit limit is defined as 30 bits on 64-bit platforms, or 15 bits otherwise. (I'll just call this 30 from here on to keep the explanation simple. But note that if you were using Python compiled for 32-bit, your benchmark's result would depend on whether x was less than 32,768 or not.)

When an operation"s inputs and outputs stay within this 30-bit limit, the operation can be handled in an optimized way instead of the general way. The beginning of the integer multiplication implementation is as follows:

static PyObject *
long_mul(PyLongObject *a, PyLongObject *b)
{
    PyLongObject *z;

    CHECK_BINOP(a, b);

    /* fast path for single-digit multiplication */
    if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
        stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b);
#ifdef HAVE_LONG_LONG
        return PyLong_FromLongLong((PY_LONG_LONG)v);
#else
        /* if we don"t have long long then we"re almost certainly
           using 15-bit digits, so v will fit in a long.  In the
           unlikely event that we"re using 30-bit digits on a platform
           without long long, a large v will just cause us to fall
           through to the general multiplication code below. */
        if (v >= LONG_MIN && v <= LONG_MAX)
            return PyLong_FromLong((long)v);
#endif
    }

So when multiplying two integers where each fits in a 30-bit digit, this is done as a direct multiplication by the CPython interpreter, instead of working with the integers as arrays. (MEDIUM_VALUE() called on a positive integer object simply gets its first 30-bit digit.) If the result fits in a single 30-bit digit, PyLong_FromLongLong() will notice this in a relatively small number of operations, and create a single-digit integer object to store it.

In contrast, left shifts are not optimized this way, and every left shift deals with the integer being shifted as an array. In particular, if you look at the source code for long_lshift(), in the case of a small but positive left shift, a 2-digit integer object is always created, if only to have its length truncated to 1 later: (my comments in /*** ***/)

static PyObject *
long_lshift(PyObject *v, PyObject *w)
{
    /*** ... ***/

    wordshift = shiftby / PyLong_SHIFT;   /*** zero for small w ***/
    remshift  = shiftby - wordshift * PyLong_SHIFT;   /*** w for small w ***/

    oldsize = Py_ABS(Py_SIZE(a));   /*** 1 for small v > 0 ***/
    newsize = oldsize + wordshift;
    if (remshift)
        ++newsize;   /*** here newsize becomes at least 2 for w > 0, v > 0 ***/
    z = _PyLong_New(newsize);

    /*** ... ***/
}

Integer division

You didn"t ask about the worse performance of integer floor division compared to right shifts, because that fit your (and my) expectations. But dividing a small positive number by another small positive number is not as optimized as small multiplications, either. Every // computes both the quotient and the remainder using the function long_divrem(). This remainder is computed for a small divisor with a multiplication, and is stored in a newly-allocated integer object, which in this situation is immediately discarded.

Answer #5

What does the “at” (@) symbol do in Python?

In short, it is used in decorator syntax and for matrix multiplication.

In the context of decorators, this syntax:

@decorator
def decorated_function():
    """this function is decorated"""

is equivalent to this:

def decorated_function():
    """this function is decorated"""

decorated_function = decorator(decorated_function)

In the context of matrix multiplication, a @ b invokes a.__matmul__(b) - making this syntax:

a @ b

equivalent to

dot(a, b)

and

a @= b

equivalent to

a = dot(a, b)

where dot is, for example, the numpy matrix multiplication function and a and b are matrices.

How could you discover this on your own?

I also do not know what to search for as searching Python docs or Google does not return relevant results when the @ symbol is included.

If you want to have a rather complete view of what a particular piece of python syntax does, look directly at the grammar file. For the Python 3 branch:

~$ grep -C 1 "@" cpython/Grammar/Grammar 

decorator: "@" dotted_name [ "(" [arglist] ")" ] NEWLINE
decorators: decorator+
--
testlist_star_expr: (test|star_expr) ("," (test|star_expr))* [","]
augassign: ("+=" | "-=" | "*=" | "@=" | "/=" | "%=" | "&=" | "|=" | "^=" |
            "<<=" | ">>=" | "**=" | "//=")
--
arith_expr: term (("+"|"-") term)*
term: factor (("*"|"@"|"/"|"%"|"//") factor)*
factor: ("+"|"-"|"~") factor | power

We can see here that @ is used in three contexts:

  • decorators
  • an operator between factors
  • an augmented assignment operator

Decorator Syntax:

A google search for "decorator python docs" gives as one of the top results the "Compound Statements" section of the "Python Language Reference." Scrolling down to the section on function definitions, which we can find by searching for the word "decorator", we see that... there's a lot to read. But the word "decorator" is a link to the glossary, which tells us:

decorator

A function returning another function, usually applied as a function transformation using the @wrapper syntax. Common examples for decorators are classmethod() and staticmethod().

The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equivalent:

def f(...):
    ...
f = staticmethod(f)

@staticmethod
def f(...):
    ...

The same concept exists for classes, but is less commonly used there. See the documentation for function definitions and class definitions for more about decorators.

So, we see that

@foo
def bar():
    pass

is semantically the same as:

def bar():
    pass

bar = foo(bar)

They are not exactly the same because Python evaluates the foo expression (which could be a dotted lookup and a function call) before bar with the decorator (@) syntax, but evaluates the foo expression after bar in the other case.

(If this difference makes a difference in the meaning of your code, you should reconsider what you're doing with your life, because that would be pathological.)
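A small illustration of that evaluation order (my own sketch, not from the original answer):

def make_decorator(label):
    print("evaluating decorator expression:", label)
    def decorator(fn):
        print("decorating:", fn.__name__)
        return fn
    return decorator

# "evaluating decorator expression: first" prints before "decorating: bar",
# because the expression after @ is evaluated before bar is created.
@make_decorator("first")
def bar():
    pass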

Stacked Decorators

If we go back to the function definition syntax documentation, we see:

@f1(arg)
@f2
def func(): pass

is roughly equivalent to

def func(): pass
func = f1(arg)(f2(func))

This is a demonstration that we can call a function that's a decorator first, as well as stack decorators. Functions, in Python, are first class objects - which means you can pass a function as an argument to another function, and return functions. Decorators do both of these things.

If we stack decorators, the function, as defined, gets passed first to the decorator immediately above it, then the next, and so on.
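A minimal runnable example of stacking (my own sketch, not from the original answer):

def exclaim(fn):
    # Append "!" to whatever fn returns.
    return lambda: fn() + "!"

def shout(fn):
    # Upper-case whatever fn returns.
    return lambda: fn().upper()

@exclaim
@shout
def greet():
    return "hi"

# Equivalent to greet = exclaim(shout(greet)):
# shout is applied first, then exclaim wraps the result.
print(greet())  # HI!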

That about sums up the usage for @ in the context of decorators.

The Operator, @

In the lexical analysis section of the language reference, we have a section on operators, which includes @, which makes it also an operator:

The following tokens are operators:

+       -       *       **      /       //      %      @
<<      >>      &       |       ^       ~
<       >       <=      >=      ==      !=

and in the next page, the Data Model, we have the section Emulating Numeric Types,

object.__add__(self, other)
object.__sub__(self, other) 
object.__mul__(self, other) 
object.__matmul__(self, other) 
object.__truediv__(self, other) 
object.__floordiv__(self, other)

[...] These methods are called to implement the binary arithmetic operations (+, -, *, @, /, //, [...]

And we see that __matmul__ corresponds to @. If we search the documentation for "matmul" we get a link to What's New in Python 3.5 with "matmul" under a heading "PEP 465 - A dedicated infix operator for matrix multiplication".

it can be implemented by defining __matmul__(), __rmatmul__(), and __imatmul__() for regular, reflected, and in-place matrix multiplication.

(So now we learn that @= is the in-place version). It further explains:

Matrix multiplication is a notably common operation in many fields of mathematics, science, engineering, and the addition of @ allows writing cleaner code:

S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

instead of:

S = dot((dot(H, beta) - r).T,
        dot(inv(dot(dot(H, V), H.T)), dot(H, beta) - r))

While this operator can be overloaded to do almost anything, in numpy, for example, we would use this syntax to calculate the inner and outer product of arrays and matrices:

>>> from numpy import array, matrix
>>> array([[1,2,3]]).T @ array([[1,2,3]])
array([[1, 2, 3],
       [2, 4, 6],
       [3, 6, 9]])
>>> array([[1,2,3]]) @ array([[1,2,3]]).T
array([[14]])
>>> matrix([1,2,3]).T @ matrix([1,2,3])
matrix([[1, 2, 3],
        [2, 4, 6],
        [3, 6, 9]])
>>> matrix([1,2,3]) @ matrix([1,2,3]).T
matrix([[14]])

Inplace matrix multiplication: @=

While researching the prior usage, we learn that there is also the inplace matrix multiplication. If we attempt to use it, we may find it is not yet implemented for numpy:

>>> m = matrix([1,2,3])
>>> m @= m.T
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: In-place matrix multiplication is not (yet) supported. Use "a = a @ b" instead of "a @= b".

When it is implemented, I would expect the result to look like this:

>>> m = matrix([1,2,3])
>>> m @= m.T
>>> m
matrix([[14]])

Answer #6

The golden spiral method

You said you couldn’t get the golden spiral method to work, and that’s a shame because it’s really, really good. I would like to give you a complete understanding of it so that maybe you can understand how to keep your points from getting “bunched up.”

So here’s a fast, non-random way to create a lattice that is approximately correct; as discussed above, no lattice will be perfect, but this may be good enough. It is compared to other methods, e.g. at BendWavy.org, but it also has a nice, pretty look as well as a guarantee about even spacing in the limit.

Primer: sunflower spirals on the unit disk

To understand this algorithm, I first invite you to look at the 2D sunflower spiral algorithm. This is based on the fact that the most irrational number is the golden ratio (1 + sqrt(5))/2, and if one emits points by the approach “stand at the center, turn a golden ratio of whole turns, then emit another point in that direction,” one naturally constructs a spiral which, as you get to higher and higher numbers of points, nevertheless refuses to have well-defined ‘bars’ that the points line up on. (Note 1.)

The algorithm for even spacing on a disk is,

from numpy import pi, cos, sin, sqrt, arange
import matplotlib.pyplot as pp

num_pts = 100
indices = arange(0, num_pts, dtype=float) + 0.5

r = sqrt(indices/num_pts)
theta = pi * (1 + 5**0.5) * indices

pp.scatter(r*cos(theta), r*sin(theta))
pp.show()

and it produces results that look like (n=100 and n=1000):

(scatter plot images omitted)

Spacing the points radially

The key strange thing is the formula r = sqrt(indices / num_pts); how did I come to that one? (Note 2.)

Well, I am using the square root here because I want these to have even-area spacing around the disk. That is the same as saying that in the limit of large N I want a little region R ∈ (r, r + dr), Θ ∈ (θ, θ + dθ) to contain a number of points proportional to its area, which is r dr dθ. Now if we pretend that we are talking about a random variable here, this has a straightforward interpretation as saying that the joint probability density for (R, Θ) is just c r for some constant c. Normalization on the unit disk would then force c = 1/π.

Now let me introduce a trick. It comes from probability theory where it’s known as sampling the inverse CDF: suppose you wanted to generate a random variable with a probability density f(z) and you have a random variable U ~ Uniform(0, 1), just like comes out of random() in most programming languages. How do you do this?

  1. First, turn your density into a cumulative distribution function or CDF, which we will call F(z). A CDF, remember, increases monotonically from 0 to 1 with derivative f(z).
  2. Then calculate the CDF’s inverse function F⁻¹(z).
  3. You will find that Z = F⁻¹(U) is distributed according to the target density. (Note 3.)

Now the golden-ratio spiral trick spaces the points out in a nicely even pattern for θ, so let’s integrate that out; for the unit disk we are left with F(r) = r². So the inverse function is F⁻¹(u) = √u, and therefore we would generate random points on the disk in polar coordinates with r = sqrt(random()); theta = 2 * pi * random().
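As code, that random version of the recipe would be something like this (my own sketch; assumes NumPy 1.17+ for default_rng):

import numpy as np

rng = np.random.default_rng()
n = 1000
# Inverse-CDF sampling: uniformly distributed points on the unit disk.
r = np.sqrt(rng.random(n))
theta = 2 * np.pi * rng.random(n)
x, y = r * np.cos(theta), r * np.sin(theta)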

Now instead of randomly sampling this inverse function we’re uniformly sampling it, and the nice thing about uniform sampling is that our results about how points are spread out in the limit of large N will behave as if we had randomly sampled it. This combination is the trick. Instead of random() we use (arange(0, num_pts, dtype=float) + 0.5)/num_pts, so that, say, if we want to sample 10 points they are r = 0.05, 0.15, 0.25, ... 0.95. We uniformly sample r to get equal-area spacing, and we use the sunflower increment to avoid awful “bars” of points in the output.

Now doing the sunflower on a sphere

The changes that we need to make to dot the sphere with points merely involve switching out the polar coordinates for spherical coordinates. The radial coordinate of course doesn't enter into this because we're on a unit sphere. To keep things a little more consistent here, even though I was trained as a physicist I'll use mathematicians' coordinates where 0 ≤ φ ≤ π is latitude coming down from the pole and 0 ≤ θ ≤ 2π is longitude. So the difference from above is that we are basically replacing the variable r with φ.

Our area element, which was r dr dθ, now becomes the not-much-more-complicated sin(φ) dφ dθ. So our joint density for uniform spacing is sin(φ)/4π. Integrating out θ, we find f(φ) = sin(φ)/2, thus F(φ) = (1 − cos(φ))/2. Inverting this we can see that a uniform random variable would look like acos(1 − 2u), but we sample uniformly instead of randomly, so we instead use φ_k = acos(1 − 2(k + 0.5)/N). And the rest of the algorithm is just projecting this onto the x, y, and z coordinates:

from numpy import pi, cos, sin, arccos, arange
import mpl_toolkits.mplot3d
import matplotlib.pyplot as pp

num_pts = 1000
indices = arange(0, num_pts, dtype=float) + 0.5

phi = arccos(1 - 2*indices/num_pts)
theta = pi * (1 + 5**0.5) * indices

x, y, z = cos(theta) * sin(phi), sin(theta) * sin(phi), cos(phi)

pp.figure().add_subplot(111, projection="3d").scatter(x, y, z)
pp.show()

Again, for n=100 and n=1000 the results look like an even covering of the sphere (images omitted).

Further research

I wanted to give a shout out to Martin Roberts’s blog. Note that above I created an offset of my indices by adding 0.5 to each index. This was just visually appealing to me, but it turns out that the choice of offset matters a lot, is not constant over the interval, and can mean getting as much as 8% better accuracy in packing if chosen correctly. There should also be a way to get his R2 sequence to cover a sphere, and it would be interesting to see if this also produced a nice even covering, perhaps as-is, but perhaps needing to be, say, taken from only a half of the unit square cut diagonally or so and stretched around to get a circle.

Notes

  1. Those “bars” are formed by rational approximations to a number, and the best rational approximations to a number come from its continued fraction expression, z + 1/(n_1 + 1/(n_2 + 1/(n_3 + ...))) where z is an integer and n_1, n_2, n_3, ... is either a finite or infinite sequence of positive integers:

    from math import floor

    def continued_fraction(r):
        # with a float input, precision loss makes later terms unreliable
        while r != 0:
            n = floor(r)
            yield n
            r = 1/(r - n)
    

    Since the fraction part 1/(...) is always between zero and one, a large integer in the continued fraction allows for a particularly good rational approximation: “one divided by something between 100 and 101” is better than “one divided by something between 1 and 2.” The most irrational number is therefore the one which is 1 + 1/(1 + 1/(1 + ...)) and has no particularly good rational approximations; one can solve φ = 1 + 1/φ by multiplying through by φ to get the formula for the golden ratio.
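    For example (my own usage sketch), the golden ratio’s expansion is all 1s; with a float input, later terms eventually degrade due to rounding error:

    from itertools import islice

    phi = (1 + 5**0.5) / 2
    print(list(islice(continued_fraction(phi), 10)))
    # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]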

  2. For folks who are not so familiar with NumPy -- all of the functions are “vectorized,” so that sqrt(array) is the same as what other languages might write map(sqrt, array). So this is a component-by-component sqrt application. The same also holds for division by a scalar or addition with scalars -- those apply to all components in parallel.

  3. The proof is simple once you know that this is the result. If you ask what's the probability that z < Z < z + dz, this is the same as asking what's the probability that z < F⁻¹(U) < z + dz; apply F to all three expressions, noting that it is a monotonically increasing function, hence F(z) < U < F(z + dz); expand the right hand side out to find F(z) + f(z) dz; and since U is uniform this probability is just f(z) dz as promised.

Answer #7

Let"s visualize (you gonna remember always), enter image description here

In Pandas:

  1. axis=0 means along "indexes". It's a row-wise operation.

Suppose we perform a concat() operation on dataframe1 and dataframe2: we take the first row of dataframe1 and place it into a new DataFrame, then we take out another row of dataframe1 and put it into the new DataFrame, repeating this process until we reach the bottom of dataframe1. Then we do the same for dataframe2.

Basically, stacking dataframe2 on top of dataframe1, or vice versa.

E.g. making a pile of books on a table or the floor.

  2. axis=1 means along "columns". It's a column-wise operation.

Suppose we perform a concat() operation on dataframe1 and dataframe2: we take out the first complete column (a.k.a. the first Series) of dataframe1 and place it into a new DataFrame, then we take out the second column of dataframe1 and keep it adjacent to the first (sideways), repeating this operation until all columns are finished. Then we repeat the same process on dataframe2. Basically, stacking dataframe2 sideways.

E.g. arranging books on a bookshelf.
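A minimal concat() sketch (my own illustration, assuming pandas is installed):

import pandas as pd

df1 = pd.DataFrame({"x": [1, 2]})
df2 = pd.DataFrame({"y": [3, 4]})

# axis=0: stack rows on top of each other (the pile of books)
print(pd.concat([df1, df1], axis=0))

# axis=1: place columns side by side (the bookshelf)
print(pd.concat([df1, df2], axis=1))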

What's more, since arrays are better than matrices at representing a nested n-dimensional structure, the picture below can help you visualize how axis plays an important role when you generalize to more than one dimension. You can print/write/draw/visualize any n-dimensional array, but writing or visualizing the same in a matrix representation is impossible on paper beyond 3 dimensions.

(diagram omitted)

Answer #8

You can make the observation that for a string to be considered repeating, its length must be divisible by the length of its repeated sequence. Given that, here is a solution that generates divisors of the length from 1 to n / 2 inclusive, divides the original string into substrings with the length of the divisors, and tests the equality of the result set:

from math import sqrt, floor

def divquot(n):
    if n > 1:
        yield 1, n
    swapped = []
    for d in range(2, int(floor(sqrt(n))) + 1):
        q, r = divmod(n, d)
        if r == 0:
            yield d, q
            swapped.append((q, d))
    while swapped:
        yield swapped.pop()

def repeats(s):
    n = len(s)
    for d, q in divquot(n):
        sl = s[0:d]
        if sl * q == s:
            return sl
    return None
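For example (my own usage sketch):

>>> repeats("abcabcabc")
'abc'
>>> repeats("aaaa")
'a'
>>> repeats("abcd") is None
True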

EDIT: In Python 3, the / operator has changed to do float division by default. To get the int division from Python 2, you can use the // operator instead. Thank you to @TigerhawkT3 for bringing this to my attention.

The // operator performs integer division in both Python 2 and Python 3, so I've updated the answer to support both versions. The part where we test to see if all the substrings are equal is now a short-circuiting operation using all and a generator expression.

UPDATE: In response to a change in the original question, the code has now been updated to return the smallest repeating substring if it exists and None if it does not. @godlygeek has suggested using divmod to reduce the number of iterations on the divisors generator, and the code has been updated to match that as well. It now returns all positive divisors of n in ascending order, exclusive of n itself.

Further update for high performance: After multiple tests, I've come to the conclusion that simply testing for string equality has the best performance out of any slicing or iterator solution in Python. Thus, I've taken a leaf out of @TigerhawkT3's book and updated my solution. It's now over 6x as fast as before, noticeably faster than Tigerhawk's solution but slower than David's.

Answer #9

float → float: math.log2(x)

import math

log2 = math.log(x, 2.0)
log2 = math.log2(x)   # python 3.3 or later

float → int: math.frexp(x)

If all you need is the integer part of log base 2 of a floating point number, extracting the exponent is pretty efficient:

log2int_slow = int(math.floor(math.log(x, 2.0)))    # these give the
log2int_fast = math.frexp(x)[1] - 1                 # same result
  • Python frexp() calls the C function frexp() which just grabs and tweaks the exponent.

  • Python frexp() returns a tuple (mantissa, exponent). So [1] gets the exponent part.

  • For integral powers of 2 the exponent is one more than you might expect. For example 32 is stored as 0.5×2⁶. This explains the - 1 above. Also works for 1/32 which is stored as 0.5×2⁻⁴.

  • Floors toward negative infinity, so log₂31 computed this way is 4 not 5. log₂(1/17) is -5 not -4.


int → int: x.bit_length()

If both input and output are integers, this native integer method could be very efficient:

log2int_faster = x.bit_length() - 1
  • - 1 because 2ⁿ requires n+1 bits. Works for very large integers, e.g. 2**10000.

  • Floors toward negative infinity, so log₂31 computed this way is 4 not 5.
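For example (my own quick check of the floor behavior):

>>> (31).bit_length() - 1
4
>>> (32).bit_length() - 1
5
>>> (2**10000).bit_length() - 1
10000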

Answer #10

Most answers suggested round or format. round sometimes rounds up, and in my case I needed the value of my variable to be rounded down and not just displayed as such.

round(2.357, 2)  # -> 2.36

I found the answer here: How do I round a floating point number up to a certain decimal place?

import math
v = 2.357
print(math.ceil(v*100)/100)  # -> 2.36
print(math.floor(v*100)/100)  # -> 2.35

or:

from math import floor, ceil

def roundDown(n, d=8):
    d = int("1" + ("0" * d))
    return floor(n * d) / d

def roundUp(n, d=8):
    d = int("1" + ("0" * d))
    return ceil(n * d) / d
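For example (my own usage sketch):

>>> roundDown(2.357, 2)
2.35
>>> roundUp(2.357, 2)
2.36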
