floor | NumPy | Python Methods and Functions

NumPy's ` ndarray.__floordiv__() ` divides each element of an **array** by the value provided as a parameter, and always returns the **floor** of each quotient: the largest integer less than or equal to it.

Syntax: `ndarray.__floordiv__(self, value, /)`

Return: `self // value`

**Example #1:**

In this example, each element in the array is divided by the value given as a parameter to the ` ndarray.__floordiv__() ` method, which returns the floor of each quotient. This method works for positive, negative, and floating-point array values.


** Output: **

[0. 1. 1. 2. 2.]
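The code for this example did not survive extraction; a minimal snippet (the input array is my assumption, chosen to reproduce the output shown above) would be:

```python
import numpy as np

# Hypothetical input array, chosen to match the stated output.
arr = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# ndarray.__floordiv__ is what the // operator dispatches to.
print(arr.__floordiv__(2))  # same as arr // 2 -> [0. 1. 1. 2. 2.]
```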

**Example #2:**

**Output:**

[[0. 0. 1. 1. 1.] [2. 1. 1. 1. 0.]]
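The code here was also lost; one plausible 2-D input (again an assumption on my part) that reproduces this output:

```python
import numpy as np

# Hypothetical 2-D input, chosen to match the stated output.
arr = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                [6.0, 5.0, 4.0, 3.0, 2.0]])

print(arr.__floordiv__(3))  # same as arr // 3
```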

Can someone explain this (straight from the docs; emphasis mine):

math.ceil(x): Return the ceiling of x as a float, the smallest integer value greater than or equal to x.

math.floor(x): Return the floor of x as a float, the largest integer value less than or equal to x.

Why would `.ceil` and `.floor` return floats when they are by definition supposed to calculate integers?

**EDIT:**

Well this got some very good arguments as to why they *should* return floats, and I was just getting used to the idea, when @jcollado pointed out that they in fact *do* return ints in Python 3...

Cong Ma does a good job of explaining what `__getitem__` is used for, but I want to give you an example which might be useful. Imagine a class which models a building. Within the data for the building it includes a number of attributes, including descriptions of the companies that occupy each floor:

Without using `__getitem__` we would have a class like this:

```
class Building(object):
    def __init__(self, floors):
        self._floors = [None] * floors
    def occupy(self, floor_number, data):
        self._floors[floor_number] = data
    def get_floor_data(self, floor_number):
        return self._floors[floor_number]

building1 = Building(4)  # Construct a building with 4 floors
building1.occupy(0, "Reception")
building1.occupy(1, "ABC Corp")
building1.occupy(2, "DEF Inc")
print(building1.get_floor_data(2))
```

We could however use `__getitem__` (and its counterpart `__setitem__`) to make the usage of the Building class "nicer".

```
class Building(object):
    def __init__(self, floors):
        self._floors = [None] * floors
    def __setitem__(self, floor_number, data):
        self._floors[floor_number] = data
    def __getitem__(self, floor_number):
        return self._floors[floor_number]

building1 = Building(4)  # Construct a building with 4 floors
building1[0] = "Reception"
building1[1] = "ABC Corp"
building1[2] = "DEF Inc"
print(building1[2])
```

Whether you use `__setitem__` like this really depends on how you plan to abstract your data. In this case we have decided to treat a building as a container of floors; you could also implement an iterator for the Building, and maybe even the ability to slice, i.e. get more than one floor's data at a time. It depends on what you need.
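As a sketch of those last two ideas (this extension is mine, not part of the original example): because `__getitem__` receives whatever appears inside the brackets, forwarding a slice object to the underlying list gives multi-floor access for free, and `__iter__` makes the building iterable.

```python
class Building(object):
    def __init__(self, floors):
        self._floors = [None] * floors
    def __setitem__(self, floor_number, data):
        self._floors[floor_number] = data
    def __getitem__(self, floor_number):
        # A slice object passed here is forwarded to the list,
        # so building[1:3] returns several floors' data at once.
        return self._floors[floor_number]
    def __iter__(self):
        return iter(self._floors)

building1 = Building(4)
building1[0] = "Reception"
building1[1] = "ABC Corp"
building1[2] = "DEF Inc"
print(building1[1:3])   # -> ['ABC Corp', 'DEF Inc']
print(list(building1))  # -> ['Reception', 'ABC Corp', 'DEF Inc', None]
```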

What you have is a `float` literal without the trailing zero, which you then access the `__truediv__` method of. It's not an operator in itself; the first dot is part of the float value, and the second is the dot operator to access the object's properties and methods.

You can reach the same point by doing the following.

```
>>> f = 1.
>>> f
1.0
>>> f.__floordiv__
<method-wrapper '__floordiv__' of float object at 0x7f9fb4dc1a20>
```

Another example

```
>>> 1..__add__(2.)
3.0
```

Here we add 1.0 to 2.0, which obviously yields 3.0.

## How can I force division to be floating point in Python?

I have two integer values a and b, but I need their ratio in floating point. I know that a < b and I want to calculate a/b, so if I use integer division I'll always get 0 with a remainder of a.

How can I force c to be a floating point number in Python in the following?

`c = a / b`

What is really being asked here is: "How do I force true division such that `a / b` will return a fraction?"

In Python 3, to get true division, you simply do `a / b`.

```
>>> 1/2
0.5
```

Floor division, the classic division behavior for integers, is now `a // b`:

```
>>> 1//2
0
>>> 1//2.0
0.0
```

However, you may be stuck using Python 2, or you may be writing code that must work in both 2 and 3.

In Python 2, it's not so simple. Some ways of dealing with classic Python 2 division are better and more robust than others.

You can get Python 3 division behavior in any given module with the following import at the top:

```
from __future__ import division
```

which then applies Python 3 style division to the entire module. It also works in a python shell at any given point. In Python 2:

```
>>> from __future__ import division
>>> 1/2
0.5
>>> 1//2
0
>>> 1//2.0
0.0
```

This is really the best solution as it ensures the code in your module is more forward compatible with Python 3.

If you don't want to apply this to the entire module, you're limited to a few workarounds. The most popular is to coerce one of the operands to a float. One robust solution is `a / (b * 1.0)`. In a fresh Python shell:

```
>>> 1/(2 * 1.0)
0.5
```

Also robust is `truediv` from the `operator` module, `operator.truediv(a, b)`, but this is likely slower because it's a function call:

```
>>> from operator import truediv
>>> truediv(1, 2)
0.5
```

Commonly seen is `a / float(b)`. This will raise a TypeError if b is a complex number. Since division with complex numbers is defined, it makes sense to me to not have division fail when passed a complex number for the divisor.

```
>>> 1 / float(2)
0.5
>>> 1 / float(2j)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't convert complex to float
```

It doesn"t make much sense to me to purposefully make your code more brittle.

You can also run Python with the `-Qnew` flag, but this has the downside of executing all modules with the new Python 3 behavior, and some of your modules may expect classic division, so I don't recommend this except for testing. But to demonstrate:

```
$ python -Qnew -c "print 1/2"
0.5
$ python -Qnew -c "print 1/2j"
-0.5j
```

This seems to be because multiplication of small numbers is optimized in CPython 3.5, in a way that left shifts by small numbers are not. Positive left shifts always create a larger integer object to store the result, as part of the calculation, while for multiplications of the sort you used in your test, a special optimization avoids this and creates an integer object of the correct size. This can be seen in the source code of Python's integer implementation.

Because integers in Python are arbitrary-precision, they are stored as arrays of integer "digits", with a limit on the number of bits per integer digit. So in the general case, operations involving integers are not single operations, but instead need to handle the case of multiple "digits". In *pyport.h*, this bit limit is defined as 30 bits on 64-bit platforms, or 15 bits otherwise. (I'll just call this 30 from here on to keep the explanation simple. But note that if you were using Python compiled for 32-bit, your benchmark's result would depend on whether `x` were less than 32,768 or not.)

When an operation"s inputs and outputs stay within this 30-bit limit, the operation can be handled in an optimized way instead of the general way. The beginning of the integer multiplication implementation is as follows:

```
static PyObject *
long_mul(PyLongObject *a, PyLongObject *b)
{
    PyLongObject *z;

    CHECK_BINOP(a, b);

    /* fast path for single-digit multiplication */
    if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
        stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b);
#ifdef HAVE_LONG_LONG
        return PyLong_FromLongLong((PY_LONG_LONG)v);
#else
        /* if we don't have long long then we're almost certainly
           using 15-bit digits, so v will fit in a long.  In the
           unlikely event that we're using 30-bit digits on a platform
           without long long, a large v will just cause us to fall
           through to the general multiplication code below. */
        if (v >= LONG_MIN && v <= LONG_MAX)
            return PyLong_FromLong((long)v);
#endif
    }
```

So when multiplying two integers where each fits in a 30-bit digit, this is done as a direct multiplication by the CPython interpreter, instead of working with the integers as arrays. (`MEDIUM_VALUE()` called on a positive integer object simply gets its first 30-bit digit.) If the result fits in a single 30-bit digit, `PyLong_FromLongLong()` will notice this in a relatively small number of operations, and create a single-digit integer object to store it.

In contrast, left shifts are not optimized this way, and every left shift deals with the integer being shifted as an array. In particular, if you look at the source code for `long_lshift()`, in the case of a small but positive left shift, a 2-digit integer object is always created, if only to have its length truncated to 1 later: *(my comments in `/*** ***/`)*

```
static PyObject *
long_lshift(PyObject *v, PyObject *w)
{
    /*** ... ***/
    wordshift = shiftby / PyLong_SHIFT;   /*** zero for small w ***/
    remshift  = shiftby - wordshift * PyLong_SHIFT;   /*** w for small w ***/
    oldsize = Py_ABS(Py_SIZE(a));   /*** 1 for small v > 0 ***/
    newsize = oldsize + wordshift;
    if (remshift)
        ++newsize;   /*** here newsize becomes at least 2 for w > 0, v > 0 ***/
    z = _PyLong_New(newsize);
    /*** ... ***/
}
```

You didn't ask about the worse performance of integer floor division compared to right shifts, because that fit your (and my) expectations. But dividing a small positive number by another small positive number is not as optimized as small multiplications, either. Every `//` computes both the quotient *and* the remainder using the function `long_divrem()`. This remainder is computed for a small divisor with a multiplication, and is stored in a newly-allocated integer object, which in this situation is immediately discarded.
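If you want to observe these paths yourself, a rough `timeit` comparison in the spirit of the original benchmark looks like this (iteration counts are arbitrary, and both the absolute numbers and even the ranking depend on your CPython version, since the shift allocation described above is from the 3.5-era source):

```python
import timeit

# Small-int multiply vs. small left shift.
mul_time = timeit.timeit("x * x", setup="x = 5", number=500_000)
shl_time = timeit.timeit("x << 1", setup="x = 5", number=500_000)

print(f"x * x : {mul_time:.4f}s")
print(f"x << 1: {shl_time:.4f}s")
```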

In short, it is used in decorator syntax and for matrix multiplication.

In the context of decorators, this syntax:

```
@decorator
def decorated_function():
    """this function is decorated"""
```

is equivalent to this:

```
def decorated_function():
    """this function is decorated"""

decorated_function = decorator(decorated_function)
```

In the context of matrix multiplication, `a @ b` invokes `a.__matmul__(b)`, making this syntax:

```
a @ b
```

equivalent to

```
dot(a, b)
```

and

```
a @= b
```

equivalent to

```
a = dot(a, b)
```

where `dot` is, for example, the numpy matrix multiplication function and `a` and `b` are matrices.
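To see that `a @ b` really just dispatches to `__matmul__`, here is a toy class of my own (not from the original post) that overloads the operator:

```python
class Vec:
    """A toy vector whose @ operator computes the dot product."""
    def __init__(self, data):
        self.data = data
    def __matmul__(self, other):
        return sum(a * b for a, b in zip(self.data, other.data))

v = Vec([1, 2, 3])
w = Vec([4, 5, 6])
print(v @ w)            # -> 32
print(v.__matmul__(w))  # -> 32, the call the @ syntax expands to
```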

I also do not know what to search for as searching Python docs or Google does not return relevant results when the @ symbol is included.

If you want to have a rather complete view of what a particular piece of python syntax does, look directly at the grammar file. For the Python 3 branch:

```
~$ grep -C 1 "@" cpython/Grammar/Grammar
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
--
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
            '<<=' | '>>=' | '**=' | '//=')
--
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
```

We can see here that `@` is used in three contexts:

- decorators
- an operator between factors
- an augmented assignment operator

A google search for "decorator python docs" gives as one of the top results the "Compound Statements" section of the "Python Language Reference." Scrolling down to the section on function definitions, which we can find by searching for the word "decorator", we see that... there's a lot to read. But the word "decorator" is a link to the glossary, which tells us:

## decorator

A function returning another function, usually applied as a function transformation using the `@wrapper` syntax. Common examples for decorators are `classmethod()` and `staticmethod()`.

The decorator syntax is merely syntactic sugar; the following two function definitions are semantically equivalent:

```
def f(...):
    ...
f = staticmethod(f)

@staticmethod
def f(...):
    ...
```

The same concept exists for classes, but is less commonly used there. See the documentation for function definitions and class definitions for more about decorators.

So, we see that

```
@foo
def bar():
    pass
```

is semantically the same as:

```
def bar():
    pass

bar = foo(bar)
```

They are not exactly the same because Python evaluates the foo expression (which could be a dotted lookup and a function call) before bar with the decorator (`@`) syntax, but evaluates the foo expression *after* bar in the other case.

(If this difference makes a difference in the meaning of your code, you should reconsider what you're doing with your life, because that would be pathological.)

If we go back to the function definition syntax documentation, we see:

```
@f1(arg)
@f2
def func(): pass
```

is roughly equivalent to

```
def func(): pass
func = f1(arg)(f2(func))
```

This is a demonstration that we can call a function that's a decorator first, as well as stack decorators. Functions, in Python, are first class objects, which means you can pass a function as an argument to another function, and return functions. Decorators do both of these things.

If we stack decorators, the function, as defined, gets passed first to the decorator immediately above it, then the next, and so on.
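A small runnable sketch of stacking (the decorators here are invented for illustration):

```python
def shout(func):
    def wrapper():
        return func().upper()
    return wrapper

def exclaim(func):
    def wrapper():
        return func() + "!"
    return wrapper

@shout      # applied second (outermost)
@exclaim    # applied first (closest to the function)
def greet():
    return "hello"

# Equivalent to: greet = shout(exclaim(greet))
print(greet())  # -> HELLO!
```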

That about sums up the usage for `@` in the context of decorators.

## `@`

In the lexical analysis section of the language reference, we have a section on operators, which includes `@`, which makes it also an operator:

The following tokens are operators:

`+ - * ** / // % @ << >> & | ^ ~ < > <= >= == !=`

and in the next page, the Data Model, we have the section Emulating Numeric Types,

```
object.__add__(self, other)
object.__sub__(self, other)
object.__mul__(self, other)
object.__matmul__(self, other)
object.__truediv__(self, other)
object.__floordiv__(self, other)
```

[...] These methods are called to implement the binary arithmetic operations (`+`, `-`, `*`, `@`, `/`, `//`, [...])

And we see that `__matmul__` corresponds to `@`. If we search the documentation for "matmul" we get a link to What's New in Python 3.5 with "matmul" under the heading "PEP 465 - A dedicated infix operator for matrix multiplication".

it can be implemented by defining `__matmul__()`, `__rmatmul__()`, and `__imatmul__()` for regular, reflected, and in-place matrix multiplication.

(So now we learn that `@=` is the in-place version.) It further explains:

Matrix multiplication is a notably common operation in many fields of mathematics, science, and engineering, and the addition of @ allows writing cleaner code:

`S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)`

instead of:

`S = dot((dot(H, beta) - r).T, dot(inv(dot(dot(H, V), H.T)), dot(H, beta) - r))`

While this operator can be overloaded to do almost anything, in `numpy`

, for example, we would use this syntax to calculate the inner and outer product of arrays and matrices:

```
>>> from numpy import array, matrix
>>> array([[1,2,3]]).T @ array([[1,2,3]])
array([[1, 2, 3],
       [2, 4, 6],
       [3, 6, 9]])
>>> array([[1,2,3]]) @ array([[1,2,3]]).T
array([[14]])
>>> matrix([1,2,3]).T @ matrix([1,2,3])
matrix([[1, 2, 3],
        [2, 4, 6],
        [3, 6, 9]])
>>> matrix([1,2,3]) @ matrix([1,2,3]).T
matrix([[14]])
```

## `@=`

While researching the prior usage, we learn that there is also in-place matrix multiplication. If we attempt to use it, we may find it is not yet implemented for numpy:

```
>>> m = matrix([1,2,3])
>>> m @= m.T
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: In-place matrix multiplication is not (yet) supported. Use 'a = a @ b' instead of 'a @= b'.
```

When it is implemented, I would expect the result to look like this:

```
>>> m = matrix([1,2,3])
>>> m @= m.T
>>> m
matrix([[14]])
```

You said you couldn't get the golden spiral method to work, and that's a shame because it's really, really good. I would like to give you a complete understanding of it so that maybe you can understand how to keep this from being "bunched up."

So here's a fast, non-random way to create a lattice that is approximately correct; as discussed above, no lattice will be perfect, but this may be good enough. It is compared to other methods e.g. at BendWavy.org, but it just has a nice and pretty look as well as a guarantee about even spacing in the limit.

To understand this algorithm, I first invite you to look at the 2D sunflower spiral algorithm. This is based on the fact that the most irrational number is the golden ratio `(1 + sqrt(5))/2`, and if one emits points by the approach "stand at the center, turn a golden ratio of whole turns, then emit another point in that direction," one naturally constructs a spiral which, as you get to higher and higher numbers of points, nevertheless refuses to have well-defined 'bars' that the points line up on. (Note 1.)

The algorithm for even spacing on a disk is,

```
from numpy import pi, cos, sin, sqrt, arange
import matplotlib.pyplot as pp
num_pts = 100
indices = arange(0, num_pts, dtype=float) + 0.5
r = sqrt(indices/num_pts)
theta = pi * (1 + 5**0.5) * indices
pp.scatter(r*cos(theta), r*sin(theta))
pp.show()
```

and it produces results that look like (n=100 and n=1000):

The key strange thing is the formula `r = sqrt(indices / num_pts)`; how did I come to that one? (Note 2.)

Well, I am using the square root here because I want these to have even-area spacing around the disk. That is the same as saying that in the limit of large *N* I want a little region *R* ∈ (*r*, *r* + d*r*), *Θ* ∈ (*θ*, *θ* + d*θ*) to contain a number of points proportional to its area, which is *r* d*r* d*θ*. Now if we pretend that we are talking about a random variable here, this has a straightforward interpretation as saying that the joint probability density for (*R*, *Θ*) is just *c r* for some constant *c*. Normalization on the unit disk would then force *c* = 1/π.

Now let me introduce a trick. It comes from probability theory, where it's known as sampling the inverse CDF: suppose you wanted to *generate* a random variable with a probability density *f*(*z*) and you have a random variable *U* ~ Uniform(0, 1), just like comes out of `random()` in most programming languages. How do you do this?

- First, turn your density into a cumulative distribution function, or CDF, which we will call *F*(*z*). A CDF, remember, increases monotonically from 0 to 1 with derivative *f*(*z*).
- Then calculate the CDF's inverse function *F*^{-1}(*z*).
- You will find that *Z* = *F*^{-1}(*U*) is distributed according to the target density. (Note 3.)
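As a concrete illustration of these three steps (the exponential distribution is my choice of example, not the author's): for f(z) = exp(-z) on z ≥ 0, the CDF is F(z) = 1 - exp(-z), so F^{-1}(u) = -ln(1 - u).

```python
import math
import random

def sample_exponential():
    """Sample Z ~ Exp(1) by inverting its CDF F(z) = 1 - exp(-z)."""
    u = random.random()        # U ~ Uniform(0, 1)
    return -math.log(1.0 - u)  # Z = F^{-1}(U)

samples = [sample_exponential() for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to the true mean, 1.0
```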

Now the golden-ratio spiral trick spaces the points out in a nicely even pattern for *θ*, so let's integrate that out; for the unit disk we are left with *F*(*r*) = *r*^{2}. So the inverse function is *F*^{-1}(*u*) = *u*^{1/2}, and therefore we would generate random points on the disk in polar coordinates with `r = sqrt(random()); theta = 2 * pi * random()`.

Now instead of *randomly* sampling this inverse function we're *uniformly* sampling it, and the nice thing about uniform sampling is that our results about how points are spread out in the limit of large *N* will behave as if we had randomly sampled it. This combination is the trick. Instead of `random()` we use `(arange(0, num_pts, dtype=float) + 0.5)/num_pts`, so that, say, if we want to sample 10 points they are `r = 0.05, 0.15, 0.25, ..., 0.95`. We uniformly sample *r* to get equal-area spacing, and we use the sunflower increment to avoid awful "bars" of points in the output.

The changes that we need to make to dot the sphere with points merely involve switching out the polar coordinates for spherical coordinates. The radial coordinate of course doesn't enter into this because we're on a unit sphere. To keep things a little more consistent here, even though I was trained as a physicist I'll use mathematicians' coordinates, where 0 ≤ *φ* ≤ π is latitude coming down from the pole and 0 ≤ *θ* ≤ 2π is longitude. So the difference from above is that we are basically replacing the variable *r* with *φ*.

Our area element, which was *r* d*r* d*θ*, now becomes the not-much-more-complicated sin(*φ*) d*φ* d*θ*. So our joint density for uniform spacing is sin(*φ*)/4π. Integrating out *θ*, we find *f*(*φ*) = sin(*φ*)/2, thus *F*(*φ*) = (1 − cos(*φ*))/2. Inverting this we can see that a uniform random variable would look like acos(1 − 2*u*), but we sample uniformly instead of randomly, so we instead use *φ*_{k} = acos(1 − 2(*k* + 0.5)/*N*). And the rest of the algorithm is just projecting this onto the x, y, and z coordinates:

```
from numpy import pi, cos, sin, arccos, arange
import mpl_toolkits.mplot3d
import matplotlib.pyplot as pp
num_pts = 1000
indices = arange(0, num_pts, dtype=float) + 0.5
phi = arccos(1 - 2*indices/num_pts)
theta = pi * (1 + 5**0.5) * indices
x, y, z = cos(theta) * sin(phi), sin(theta) * sin(phi), cos(phi)
pp.figure().add_subplot(111, projection="3d").scatter(x, y, z)
pp.show()
```

Again for n=100 and n=1000 the results look like:

I wanted to give a shout out to Martin Roberts's blog. Note that above I created an offset of my indices by adding 0.5 to each index. This was just visually appealing to me, but it turns out that the choice of offset matters a lot, is not constant over the interval, and can mean getting as much as 8% better accuracy in packing if chosen correctly. There should also be a way to get his R_{2} sequence to cover a sphere, and it would be interesting to see if this also produced a nice even covering, perhaps as-is, but perhaps needing to be, say, taken from only a half of the unit square cut diagonally or so and stretched around to get a circle.

(Note 1.) Those "bars" are formed by rational approximations to a number, and the best rational approximations to a number come from its continued fraction expression, `z + 1/(n_1 + 1/(n_2 + 1/(n_3 + ...)))`, where `z` is an integer and `n_1, n_2, n_3, ...` is either a finite or infinite sequence of positive integers:

```
from math import floor

def continued_fraction(r):
    while r != 0:
        n = floor(r)
        yield n
        r = 1/(r - n)
```

Since the fraction part `1/(...)` is always between zero and one, a large integer in the continued fraction allows for a particularly good rational approximation: "one divided by something between 100 and 101" is better than "one divided by something between 1 and 2." The most irrational number is therefore the one which is `1 + 1/(1 + 1/(1 + ...))` and has no particularly good rational approximations; one can solve *φ* = 1 + 1/*φ* by multiplying through by *φ* to get the formula for the golden ratio.

(Note 2.) For folks who are not so familiar with NumPy: all of the functions are "vectorized," so that `sqrt(array)` is the same as what other languages might write as `map(sqrt, array)`. So this is a component-by-component `sqrt` application. The same also holds for division by a scalar or addition with scalars; those apply to all components in parallel.

(Note 3.) The proof is simple once you know that this is the result. If you ask what's the probability that *z* < *Z* < *z* + d*z*, this is the same as asking what's the probability that *z* < *F*^{-1}(*U*) < *z* + d*z*. Apply *F* to all three expressions, noting that it is a monotonically increasing function, hence *F*(*z*) < *U* < *F*(*z* + d*z*); expand the right hand side to find *F*(*z*) + *f*(*z*) d*z*, and since *U* is uniform this probability is just *f*(*z*) d*z*, as promised.

**Let's visualize it (you'll always remember it this way):**

In Pandas:

- axis=0 means along the "indexes". It's a **row-wise operation**.

Suppose, to perform a concat() operation on dataframe1 & dataframe2, we take the 1st row from dataframe1 and place it into the new DF, then we take another row from dataframe1 and put it into the new DF; we repeat this process until we reach the bottom of dataframe1. Then we do the same process for dataframe2.

Basically, stacking dataframe2 on top of dataframe1, or vice versa.

**E.g. making a pile of books on a table or floor**

- axis=1 means along the "columns". It's a **column-wise operation**.

Suppose, to perform a concat() operation on dataframe1 & dataframe2, we take out the 1st **complete column** (a.k.a. the 1st series) of dataframe1 and place it into the new DF, then we take out the second column of dataframe1 and keep it adjacent **(sideways)**; we repeat this operation until all columns are finished. Then we repeat the same process on dataframe2.

Basically, **stacking dataframe2 sideways.**

**E.g. arranging books on a bookshelf.**

More to it: arrays are a better representation of a nested n-dimensional structure than matrices, so the below can help you visualize how axis plays an important role when you generalize to more than one dimension. Also, you can actually print/write/draw/visualize any n-dim array, but writing or visualizing the same in a matrix representation beyond 3 dimensions is impossible on paper.
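The row-wise vs. column-wise picture can be sketched with NumPy arrays standing in for dataframes (pandas `concat` follows the same `axis` convention; the sample arrays are mine):

```python
import numpy as np

df1 = np.array([[1, 2], [3, 4]])  # stand-in for dataframe1
df2 = np.array([[5, 6], [7, 8]])  # stand-in for dataframe2

# axis=0: row-wise -- df2 is stacked underneath df1 (pile of books)
print(np.concatenate([df1, df2], axis=0).shape)  # -> (4, 2)

# axis=1: column-wise -- df2 is placed beside df1 (books on a shelf)
print(np.concatenate([df1, df2], axis=1).shape)  # -> (2, 4)
```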

You can make the observation that for a string to be considered repeating, its length must be divisible by the length of its repeated sequence. Given that, here is a solution that generates divisors of the length from `1` to `n / 2` inclusive, divides the original string into substrings with the length of the divisors, and tests the equality of the result set:

```
from math import sqrt, floor

def divquot(n):
    if n > 1:
        yield 1, n
    swapped = []
    for d in range(2, int(floor(sqrt(n))) + 1):
        q, r = divmod(n, d)
        if r == 0:
            yield d, q
            swapped.append((q, d))
    while swapped:
        yield swapped.pop()

def repeats(s):
    n = len(s)
    for d, q in divquot(n):
        sl = s[0:d]
        if sl * q == s:
            return sl
    return None
```
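A few example calls, with the definitions repeated so the snippet runs on its own:

```python
from math import sqrt, floor

def divquot(n):
    if n > 1:
        yield 1, n
    swapped = []
    for d in range(2, int(floor(sqrt(n))) + 1):
        q, r = divmod(n, d)
        if r == 0:
            yield d, q
            swapped.append((q, d))
    while swapped:
        yield swapped.pop()

def repeats(s):
    n = len(s)
    for d, q in divquot(n):
        sl = s[0:d]          # candidate repeating unit of length d
        if sl * q == s:      # does repeating it q times rebuild s?
            return sl
    return None

print(repeats("abcabcabc"))  # -> abc  (smallest repeating unit)
print(repeats("aaaa"))       # -> a
print(repeats("abcd"))       # -> None (no repetition)
```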

**EDIT:** In Python 3, the `/` operator has changed to do float division by default. To get the `int` division from Python 2, you can use the `//` operator instead. Thank you to @TigerhawkT3 for bringing this to my attention.

The `//` operator performs integer division in both Python 2 and Python 3, so I've updated the answer to support both versions. The part where we test to see if all the substrings are equal is now a short-circuiting operation using `all` and a generator expression.

**UPDATE:** In response to a change in the original question, the code has now been updated to return the smallest repeating substring if it exists and `None` if it does not. @godlygeek has suggested using `divmod` to reduce the number of iterations on the `divisors` generator, and the code has been updated to match that as well. It now returns all positive divisors of `n` in ascending order, exclusive of `n` itself.

**Further update for high performance:** After multiple tests, I've come to the conclusion that simply testing for string equality has the best performance out of any slicing or iterator solution in Python. Thus, I've taken a leaf out of @TigerhawkT3's book and updated my solution. It's now over 6x as fast as before, noticeably faster than Tigerhawk's solution but slower than David's.

`math.log2(x)`

```
import math
log2 = math.log(x, 2.0)
log2 = math.log2(x) # python 3.3 or later
```

- Thanks @akashchandrakar and @unutbu.

`math.frexp(x)`

If all you need is the integer part of log base 2 of a floating point number, extracting the exponent is pretty efficient:

```
log2int_slow = int(math.floor(math.log(x, 2.0))) # these give the
log2int_fast = math.frexp(x)[1] - 1 # same result
```

Python frexp() calls the C function frexp() which just grabs and tweaks the exponent.

Python frexp() returns a tuple (mantissa, exponent). So `[1]` gets the exponent part.

For integral powers of 2 the exponent is one more than you might expect. For example 32 is stored as 0.5x2⁶. This explains the `- 1` above. Also works for 1/32, which is stored as 0.5x2⁻⁴.

Floors toward negative infinity, so log₂31 computed this way is 4, not 5. log₂(1/17) is -5, not -4.

`x.bit_length()`

If both input and output are integers, this native integer method could be very efficient:

```
log2int_faster = x.bit_length() - 1
```

`- 1` because 2ⁿ requires n+1 bits. Works for very large integers, e.g. `2**10000`.

Floors toward negative infinity, so log₂31 computed this way is 4, not 5.
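A quick check that the approaches above agree where they should (the sample values are mine): `frexp` and `bit_length` always match for positive integers, while the floating-point `log` variant can be off by one near exact powers of two because of rounding.

```python
import math

for x in (1, 2, 31, 32, 33, 1 << 40, (1 << 40) + 1):
    slow = int(math.floor(math.log(x, 2.0)))  # float-based, can misround
    fast = math.frexp(x)[1] - 1               # exponent extraction
    bits = x.bit_length() - 1                 # integer-only
    assert fast == bits
    print(x, slow, fast, bits)
```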

Most answers suggested `round` or `format`. `round` sometimes rounds up, and in my case I needed the *value* of my variable to be rounded down, not just displayed as such.

```
round(2.357, 2) # -> 2.36
```

I found the answer here: How do I round a floating point number up to a certain decimal place?

```
import math
v = 2.357
print(math.ceil(v*100)/100) # -> 2.36
print(math.floor(v*100)/100) # -> 2.35
```

or:

```
from math import floor, ceil

def roundDown(n, d=8):
    d = int("1" + ("0" * d))
    return floor(n * d) / d

def roundUp(n, d=8):
    d = int("1" + ("0" * d))
    return ceil(n * d) / d
```
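For instance, applied to the value from the earlier example (definitions repeated so the snippet is self-contained):

```python
from math import floor, ceil

def roundDown(n, d=8):
    d = int("1" + ("0" * d))  # builds 10**d, e.g. d=2 -> 100
    return floor(n * d) / d

def roundUp(n, d=8):
    d = int("1" + ("0" * d))
    return ceil(n * d) / d

print(roundDown(2.357, 2))  # -> 2.35
print(roundUp(2.357, 2))    # -> 2.36
```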
