# Python | NumPy matrix.resize()


Using the `numpy.matrix.resize()` method, we can resize a matrix. Remember that all the elements of the matrix must be covered by the new shape, and that the resize happens in place.

Syntax: `matrix.resize(shape)`

Return: `None` (the matrix is resized in place)

Example #1:

In this example we resize a matrix using `matrix.resize()`. Since `resize()` works in place and returns `None`, we print the matrix itself afterwards.

```
# import the important module in python
import numpy as np

# make a matrix with numpy
gfg = np.matrix('[64, 1; 12, 3]')

# applying the matrix.resize() method
gfg.resize((1, 4))

print(gfg)
```

Output:

` [[64 1 12 3]] `

Example #2:

```
# import the important module in python
import numpy as np

# make a matrix with numpy
gfg = np.matrix('[1, 2; 4, 5; 7, 8]')

# applying the matrix.resize() method
gfg.resize((2, 3))

print(gfg)
```

Output:

```
[[1 2 4]
 [5 7 8]]
```

## How do I resize an image using PIL and maintain its aspect ratio?

### Question by saturdayplace

Is there an obvious way to do this that I'm missing? I'm just trying to make thumbnails.
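A minimal sketch of the usual answer: PIL's `Image.thumbnail()` shrinks an image in place while preserving the aspect ratio. The helper below (my own illustration, not PIL code) shows the size arithmetic `thumbnail` performs:

```python
def fit_within(size, max_size):
    """Largest size that fits inside max_size while keeping the
    aspect ratio -- the same arithmetic Image.thumbnail applies."""
    w, h = size
    mw, mh = max_size
    scale = min(mw / w, mh / h, 1.0)  # never enlarge
    return (max(1, int(w * scale)), max(1, int(h * scale)))

print(fit_within((528, 203), (128, 128)))  # -> (128, 49)
```

With PIL itself the equivalent is `im.thumbnail((128, 128))`, which modifies `im` in place and never grows the image.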

## How to resize an image with OpenCV2.0 and Python2.6

I want to use OpenCV2.0 and Python2.6 to show resized images. I used and adapted this example, but unfortunately that code is for OpenCV2.1 and does not seem to work on 2.0. Here is my code:

```
import os, glob
import cv

ulpath = "exampleshq/"

for infile in glob.glob(os.path.join(ulpath, "*.jpg")):
    im = cv.LoadImageM(infile)
    thumbnail = cv.CreateMat(im.rows / 10, im.cols / 10, cv.CV_8UC3)
    cv.Resize(im, thumbnail)
    cv.NamedWindow(infile)
    cv.ShowImage(infile, thumbnail)
    cv.WaitKey(0)
    cv.DestroyWindow(infile)
```

Since I cannot use

```
cv.LoadImageM
```

I used

```
cv.LoadImage
```

instead, which was no problem in other applications. Nevertheless, cv.iplimage has no attribute rows, cols or size. Can anyone give me a hint, how to solve this problem?

## Numpy Resize/Rescale Image

I would like to take an image and change the scale of the image, while it is a numpy array.

For example I have this image of a coca-cola bottle: bottle-1

Which translates to a numpy array of shape `(528, 203, 3)` and I want to resize that to say the size of this second image: bottle-2

Which has a shape of `(140, 54, 3)`.

How do I change the size of the image to a certain shape while still maintaining the original image? Other answers suggest stripping every other or third row out, but what I want to do is basically shrink the image how you would via an image editor but in python code. Are there any libraries to do this in numpy/SciPy?

## resize ipython notebook output window

By default the IPython notebook output is limited to a small sub window at the bottom. This forces us to use the separate scroll bar that comes with the output window whenever the output is big.

Is there any configuration option to make the output window not limited in size, and instead run as high as the actual output is? Or an option to resize it once it gets created?

## Resize fields in Django Admin

Django tends to fill up the horizontal space when adding or editing entries in the admin, but in some cases this is a real waste of space: e.g. when editing a date field that is 8 characters wide, or a CharField that is 6 or 8 characters wide, the edit box still goes up to 15 or 20 characters.

How can I tell the admin how wide a textbox should be, or the height of a TextField edit box?
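One common approach (a sketch using Django's documented `formfield_overrides` hook; the model name and widget sizes here are hypothetical) is to override the form widgets on the `ModelAdmin`:

```python
from django import forms
from django.contrib import admin
from django.db import models

class MyModelAdmin(admin.ModelAdmin):
    # render every CharField as a 20-character text input and every
    # TextField as a small textarea instead of the full-width defaults
    formfield_overrides = {
        models.CharField: {"widget": forms.TextInput(attrs={"size": "20"})},
        models.TextField: {"widget": forms.Textarea(attrs={"rows": 4, "cols": 40})},
    }
```

Register it with `admin.site.register(MyModel, MyModelAdmin)`; for per-field control you can instead set `widgets` on a custom `ModelForm`.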

There are several ways to post an image in Jupyter notebooks:

## via HTML:

```
from IPython.display import Image
from IPython.core.display import HTML
Image(url="http://my_site.com/my_picture.jpg")
```

You retain the ability to use HTML tags to resize, etc...

```
Image(url="http://my_site.com/my_picture.jpg", width=100, height=100)
```

You can also display images stored locally, either via relative or absolute path.

```
PATH = "/Users/reblochonMasque/Documents/Drawings/"
Image(filename=PATH + "My_picture.jpg", width=100, height=100)
```

If the image is wider than the display settings:

use `unconfined=True` to disable max-width confinement of the image

```
from IPython.core.display import Image, display
display(Image(url="https://i.ytimg.com/vi/j22DmsZEv30/maxresdefault.jpg", width=1900, unconfined=True))
```

## or via markdown:

• make sure the cell is a markdown cell, and not a code cell (thanks @游凯超 in the comments)
• please note that on some systems, the markdown does not allow white space in the filenames (thanks @CoffeeTableEspresso and @zebralamy in the comments). On macOS, as long as you are in a markdown cell you can write `![title](../image 1.png)` and not worry about the white space.

for a web image:

```
![Image of Yaktocat](https://octodex.github.com/images/yaktocat.png)
```

as shown by @cristianmtr. Pay attention not to use curly quotes `“”` or `‘’` around the url.

or a local one:

```
![title](img/picture.png)
```

demonstrated by @Sebastian

Yes, play with `figsize` and `dpi` like so (before you call your subplot):

```
fig = plt.figure(figsize=(12, 8), dpi=100, facecolor="w", edgecolor="k")
```

As @tacaswell and @Hagne pointed out, you can also change the defaults if it's not a one-off:

```
plt.rcParams["figure.figsize"] = [12, 8]
plt.rcParams["figure.dpi"] = 100  # 200 e.g. is really fine, but slower
```

Yes, you can install `opencv` (a library used for image processing and computer vision) and use the `cv2.resize` function. For instance:

```
import cv2
import numpy as np

res = cv2.resize(img, dsize=(54, 140), interpolation=cv2.INTER_CUBIC)
```

Here `img` is thus a numpy array containing the original image, whereas `res` is a numpy array containing the resized image. An important aspect is the `interpolation` parameter: there are several ways to resize an image, which matters especially since you are scaling the image down and the size of the original image is not a multiple of the size of the resized image. Possible interpolation schemas are:

• `INTER_NEAREST` - a nearest-neighbor interpolation
• `INTER_LINEAR` - a bilinear interpolation (used by default)
• `INTER_AREA` - resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moiré-free results. But when the image is zoomed, it is similar to the `INTER_NEAREST` method.
• `INTER_CUBIC` - a bicubic interpolation over 4x4 pixel neighborhood
• `INTER_LANCZOS4` - a Lanczos interpolation over 8x8 pixel neighborhood

Like with most options, there is no "best" option in the sense that for every resize schema, there are scenarios where one strategy can be preferred over another.
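To make the idea concrete, here is a pure-numpy sketch of the simplest schema, `INTER_NEAREST` (the function name is mine, not OpenCV's): each output pixel is taken from the nearest source pixel, which is why decimating this way can drop rows and alias.

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    # map each output pixel back to its nearest source pixel
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

small = resize_nearest(np.arange(16).reshape(4, 4), 2, 2)
print(small)  # -> [[ 0  2]
              #     [ 8 10]]
```

`INTER_AREA` and the others differ only in how they combine source pixels; nearest-neighbour just picks one, which is fast but loses information.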

## How are Python's Built In Dictionaries Implemented?

Here's the short course:

• They are hash tables. (See below for the specifics of Python's implementation.)
• A new layout and algorithm, as of Python 3.6, makes them
  • ordered by key insertion, and
  • take up less space,
  • at virtually no cost in performance.
• Another optimization saves space when dicts share keys (in special cases).

The ordered aspect is unofficial as of Python 3.6 (to give other implementations a chance to keep up), but official in Python 3.7.

## Python's Dictionaries are Hash Tables

For a long time, it worked exactly like this. Python would preallocate 8 empty rows and use the hash to determine where to stick the key-value pair. For example, if the hash for the key ended in 001, it would stick it in the 1 (i.e. 2nd) index (like the example below.)

```
   <hash>       <key>     <value>
     null        null        null
...010001    ffeb678c    633241c4    # addresses of the keys and values
     null        null        null
      ...         ...         ...
```

Each row takes up 24 bytes on a 64 bit architecture, 12 on a 32 bit. (Note that the column headers are just labels for our purposes here - they don't actually exist in memory.)

If the hash ended the same as a preexisting key's hash, this is a collision, and then it would stick the key-value pair in a different location.

After 5 key-values are stored, when adding another key-value pair the probability of hash collisions is too large, so the dictionary is doubled in size. In a 64 bit process, before the resize we are wasting 72 bytes (3 empty rows), and after, 240 bytes due to the 10 empty rows.

This takes a lot of space, but the lookup time is fairly constant. The lookup algorithm is to compute the hash, go to the expected location, and compare the key's id - if they're the same object, they're equal. If not, compare the hash values; if those are not the same, the keys are not equal. Otherwise, we finally compare the keys themselves for equality, and if they are equal, return the value. That final comparison can be quite slow, but the earlier checks usually shortcut it, making lookups very quick.
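The comparison chain just described can be sketched in pure Python (a hypothetical helper, not CPython's actual C code):

```python
def entry_matches(slot_key, slot_hash, query_key, query_hash):
    if slot_key is query_key:       # same object: equal by definition
        return True
    if slot_hash != query_hash:     # different hashes: cannot be equal
        return False
    return slot_key == query_key    # last resort: full (possibly slow) equality

# two equal but distinct strings take the slow path, yet still match
a, b = "long key".upper(), "LONG KEY"
print(a is b, entry_matches(a, hash(a), b, hash(b)))  # -> False True
```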

Collisions slow things down, and an attacker could theoretically use hash collisions to perform a denial of service attack, so we randomized the initialization of the hash function such that it computes different hashes for each new Python process.

The wasted space described above has led us to modify the implementation of dictionaries, with an exciting new feature that dictionaries are now ordered by insertion.

## The New Compact Hash Tables

We start, instead, by preallocating an array for the index of the insertion.

Since our first key-value pair goes in the second slot, we index like this:

```
[null, 0, null, null, null, null, null, null]
```

And our table just gets populated by insertion order:

```
   <hash>       <key>     <value>
...010001    ffeb678c    633241c4
      ...         ...         ...
```

So when we do a lookup for a key, we use the hash to check the position we expect (in this case, we go straight to index 1 of the array), then go to that index in the hash-table (e.g. index 0), check that the keys are equal (using the same algorithm described earlier), and if so, return the value.

We retain constant lookup time, with minor speed losses in some cases and gains in others, with the upsides that we save quite a lot of space over the pre-existing implementation and we retain insertion order. The only space wasted is the null entries in the index array.
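As a toy illustration of the split layout (my own sketch, not CPython's code: it uses linear probing instead of CPython's perturbed probing, and never resizes):

```python
class CompactDictSketch:
    """Sparse index array of small ints + dense entry list, as in CPython 3.6+."""
    def __init__(self, nslots=8):
        self.indices = [None] * nslots  # slot -> position in self.entries
        self.entries = []               # (hash, key, value) in insertion order

    def _find_slot(self, key):
        mask = len(self.indices) - 1
        i = hash(key) & mask
        while self.indices[i] is not None:
            h, k, _ = self.entries[self.indices[i]]
            if h == hash(key) and k == key:
                return i                # existing key found
            i = (i + 1) & mask          # linear probe (simplified)
        return i                        # first empty slot

    def __setitem__(self, key, value):
        i = self._find_slot(key)
        if self.indices[i] is None:
            self.indices[i] = len(self.entries)
            self.entries.append((hash(key), key, value))
        else:
            self.entries[self.indices[i]] = (hash(key), key, value)

    def __getitem__(self, key):
        i = self._find_slot(key)
        if self.indices[i] is None:
            raise KeyError(key)
        return self.entries[self.indices[i]][2]

d = CompactDictSketch()
d["a"] = 1
d["b"] = 2
d["a"] = 3  # overwrite keeps the original insertion position
print(d["a"], [k for _, k, _ in d.entries])  # -> 3 ['a', 'b']
```

Only the `indices` array is sparse; the entries themselves are packed densely in insertion order, which is where both the space saving and the ordering come from.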

Raymond Hettinger introduced this on python-dev in December of 2012. It finally got into CPython in Python 3.6. Ordering by insertion was considered an implementation detail for 3.6 to allow other implementations of Python a chance to catch up.

## Shared Keys

Another optimization to save space is an implementation that shares keys. Thus, instead of having redundant dictionaries that take up all of that space, we have dictionaries that reuse the shared keys and keys' hashes. You can think of it like this:

```
    hash         key    dict_0    dict_1    dict_2 ...
     ...         ...       ...       ...       ...
```

For a 64 bit machine, this could save up to 16 bytes per key per extra dictionary.

## Shared Keys for Custom Objects & Alternatives

These shared-key dicts are intended to be used for custom objects' `__dict__`. To get this behavior, I believe you need to finish populating your `__dict__` before you instantiate your next object (see PEP 412). This means you should assign all your attributes in `__init__` or `__new__`, else you might not get your space savings.

However, if you know all of your attributes at the time `__init__` is executed, you could also provide `__slots__` for your object, and guarantee that `__dict__` is not created at all (if not available in parents), or even allow `__dict__` but guarantee that your foreseen attributes are stored in slots anyway. For more on `__slots__`, see my answer here.
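A quick sketch of the `__slots__` variant mentioned above (class names are mine):

```python
class PointSlots:
    __slots__ = ("x", "y")  # fixed attribute storage, no per-instance __dict__
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointDict:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p, q = PointSlots(1, 2), PointDict(1, 2)
print(hasattr(p, "__dict__"), hasattr(q, "__dict__"))  # -> False True
```

Attributes of `PointSlots` live in fixed slots on the instance, so there is no dictionary to share (or to waste space on) at all.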

I would try this:

```
import numpy as np
import PIL
from PIL import Image

list_im = ["Test1.jpg", "Test2.jpg", "Test3.jpg"]
imgs    = [PIL.Image.open(i) for i in list_im]
# pick the image which is the smallest, and resize the others to match it (can be arbitrary image shape here)
min_shape = sorted([(np.sum(i.size), i.size) for i in imgs])[0][1]
imgs_comb = np.hstack([np.asarray(i.resize(min_shape)) for i in imgs])

# save that beautiful picture
imgs_comb = PIL.Image.fromarray(imgs_comb)
imgs_comb.save("Trifecta.jpg")

# for a vertical stacking it is simple: use vstack
imgs_comb = np.vstack([np.asarray(i.resize(min_shape)) for i in imgs])
imgs_comb = PIL.Image.fromarray(imgs_comb)
imgs_comb.save("Trifecta_vertical.jpg")
```

It should work as long as all images are of the same variety (all RGB, all RGBA, or all grayscale). It shouldn't be difficult to ensure this is the case with a few more lines of code. Here are my example images and the result: Test1.jpg, Test2.jpg, Test3.jpg, Trifecta.jpg, Trifecta_vertical.jpg.

While it might be possible to use numpy alone to do this, the operation is not built-in. That said, you can use `scikit-image` (which is built on numpy) to do this kind of image manipulation.

Scikit-Image rescaling documentation is here.

For example, you could do the following with your image:

```
from skimage.transform import resize
bottle_resized = resize(bottle, (140, 54))
```

This will take care of things like interpolation, anti-aliasing, etc. for you.

`[*a]` is internally doing the C equivalent of:

1. Make a new, empty `list`.
2. Call `newlist.extend(a)`.
3. Return the `list`.

So if you expand your test to:

```
from sys import getsizeof

for n in range(13):
    a = [None] * n
    l = []
    l.extend(a)
    print(n, getsizeof(list(a)),
          getsizeof([x for x in a]),
          getsizeof([*a]),
          getsizeof(l))
```


you'll see the results for `getsizeof([*a])` and `l = []; l.extend(a); getsizeof(l)` are the same.

This is usually the right thing to do; when `extend`ing you're usually expecting to add more later, and similarly for generalized unpacking, it's assumed that multiple things will be added one after the other. `[*a]` is not the normal case; Python assumes there are multiple items or iterables being added to the `list` (`[*a, b, c, *d]`), so overallocation saves work in the common case.

By contrast, a `list` constructed from a single, presized iterable (with `list()`) may not grow or shrink during use, and overallocating is premature until proven otherwise; Python recently fixed a bug that made the constructor overallocate even for inputs with known size.

As for `list` comprehensions, they're effectively equivalent to repeated `append`s, so you're seeing the final result of the normal overallocation growth pattern when adding an element at a time.

To be clear, none of this is a language guarantee. It's just how CPython implements it. The Python language spec is generally unconcerned with specific growth patterns in `list` (aside from guaranteeing amortized `O(1)` `append`s and `pop`s from the end). As noted in the comments, the specific implementation changes again in 3.9; while it won't affect `[*a]`, it could affect other cases where what used to be "build a temporary `tuple` of individual items and then `extend` with the `tuple`" now becomes multiple applications of `LIST_APPEND`, which can change when the overallocation occurs and what numbers go into the calculation.

## Example doubling the image size

There are two ways to resize an image. The new size can be specified:

1. Manually;

`height, width = src.shape[:2]`

`dst = cv2.resize(src, (2*width, 2*height), interpolation = cv2.INTER_CUBIC)`

2. By a scaling factor.

`dst = cv2.resize(src, None, fx = 2, fy = 2, interpolation = cv2.INTER_CUBIC)`, where fx is the scaling factor along the horizontal axis and fy along the vertical axis.

To shrink an image, it will generally look best with INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with INTER_CUBIC (slow) or INTER_LINEAR (faster but still looks OK).

## Example shrink image to fit a max height/width (keeping aspect ratio)

```
import cv2

height, width = img.shape[:2]
max_height = 300
max_width = 300

# only shrink if img is bigger than required
if max_height < height or max_width < width:
    # get scaling factor
    scaling_factor = max_height / float(height)
    if max_width / float(width) < scaling_factor:
        scaling_factor = max_width / float(width)
    # resize image
    img = cv2.resize(img, None, fx=scaling_factor, fy=scaling_factor, interpolation=cv2.INTER_AREA)

cv2.imshow("Shrinked image", img)
key = cv2.waitKey()
```

## Using your code with cv2

```
import cv2 as cv

height, width = im.shape[:2]

thumbnail = cv.resize(im, (round(width / 10), round(height / 10)), interpolation=cv.INTER_AREA)

cv.imshow("exampleshq", thumbnail)
cv.waitKey(0)
cv.destroyAllWindows()
```

If you only have one reference to a string and you concatenate another string to the end, CPython now special cases this and tries to extend the string in place.

The end result is that the operation is amortized O(n).

e.g.

```
s = ""
for i in range(n):
    s += str(i)
```

used to be O(n^2), but now it is O(n).

From the source (bytesobject.c):

```
void
PyBytes_ConcatAndDel(register PyObject **pv, register PyObject *w)
{
    PyBytes_Concat(pv, w);
    Py_XDECREF(w);
}

/* The following function breaks the notion that strings are immutable:
   it changes the size of a string.  We get away with this only if there
   is only one module referencing the object.  You can also think of it
   as creating a new string object and destroying the old one, only
   more efficiently.  In any case, don't use this if the string may
   already be known to some other part of the code...
   Note that if there's not enough memory to resize the string, the original
   string object at *pv is deallocated, *pv is set to NULL, an "out of
   memory" exception is set, and -1 is returned.  Else (on success) 0 is
   returned, and the value in *pv may or may not be the same as on input.
   As always, an extra byte is allocated for a trailing \0 byte (newsize
   does *not* include that), and a trailing \0 byte is stored.
*/

int
_PyBytes_Resize(PyObject **pv, Py_ssize_t newsize)
{
    register PyObject *v;
    register PyBytesObject *sv;
    v = *pv;
    if (!PyBytes_Check(v) || Py_REFCNT(v) != 1 || newsize < 0) {
        *pv = 0;
        Py_DECREF(v);
        return -1;
    }
    /* XXX UNREF/NEWREF interface should be more symmetrical */
    _Py_DEC_REFTOTAL;
    _Py_ForgetReference(v);
    *pv = (PyObject *)
        PyObject_REALLOC((char *)v, PyBytesObject_SIZE + newsize);
    if (*pv == NULL) {
        PyObject_Del(v);
        PyErr_NoMemory();
        return -1;
    }
    _Py_NewReference(*pv);
    sv = (PyBytesObject *) *pv;
    Py_SIZE(sv) = newsize;
    sv->ob_sval[newsize] = '\0';
    sv->ob_shash = -1;          /* invalidate cached hash value */
    return 0;
}
```

It's easy enough to verify empirically.

```
$ python -m timeit -s"s=''" "for i in xrange(10):s+='a'"
1000000 loops, best of 3: 1.85 usec per loop
$ python -m timeit -s"s=''" "for i in xrange(100):s+='a'"
10000 loops, best of 3: 16.8 usec per loop
$ python -m timeit -s"s=''" "for i in xrange(1000):s+='a'"
10000 loops, best of 3: 158 usec per loop
$ python -m timeit -s"s=''" "for i in xrange(10000):s+='a'"
1000 loops, best of 3: 1.71 msec per loop
$ python -m timeit -s"s=''" "for i in xrange(100000):s+='a'"
10 loops, best of 3: 14.6 msec per loop
$ python -m timeit -s"s=''" "for i in xrange(1000000):s+='a'"
10 loops, best of 3: 173 msec per loop
```

It's important however to note that this optimisation isn't part of the Python spec. It's only in the CPython implementation as far as I know. The same empirical testing on PyPy or Jython, for example, might show the older O(n**2) performance.

```
$ pypy -m timeit -s"s=''" "for i in xrange(10):s+='a'"
10000 loops, best of 3: 90.8 usec per loop
$ pypy -m timeit -s"s=''" "for i in xrange(100):s+='a'"
1000 loops, best of 3: 896 usec per loop
$ pypy -m timeit -s"s=''" "for i in xrange(1000):s+='a'"
100 loops, best of 3: 9.03 msec per loop
$ pypy -m timeit -s"s=''" "for i in xrange(10000):s+='a'"
10 loops, best of 3: 89.5 msec per loop
```

So far so good, but then,

```
$ pypy -m timeit -s"s=''" "for i in xrange(100000):s+='a'"
10 loops, best of 3: 12.8 sec per loop
```

Ouch, even worse than quadratic. So PyPy is doing something that works well with short strings, but performs poorly for larger strings.
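Because the `+=` trick is implementation-specific, the portable way to build a big string in amortized O(n) on any interpreter is `str.join` (a minimal sketch, function name mine):

```python
def build_digits(n):
    # collect the pieces first, then join once; no quadratic recopying
    return "".join(str(i) for i in range(n))

print(build_digits(5))  # -> 01234
```

This performs the same on CPython, PyPy, and Jython, since `join` computes the total length up front and copies each piece exactly once.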

Here is everything about Python dicts that I was able to put together (probably more than anyone would like to know; but the answer is comprehensive).

• Python dictionaries are implemented as hash tables.

• Hash tables must allow for hash collisions i.e. even if two distinct keys have the same hash value, the table's implementation must have a strategy to insert and retrieve the key and value pairs unambiguously.

• Python `dict` uses open addressing to resolve hash collisions (explained below) (see dictobject.c:296-297).

• Python hash table is just a contiguous block of memory (sort of like an array, so you can do an `O(1)` lookup by index).

• Each slot in the table can store one and only one entry. This is important.

• Each entry in the table is actually a combination of the three values: `<hash, key, value>`. This is implemented as a C struct (see dictobject.h:51-56).

• The figure below is a logical representation of a Python hash table. In the figure below, `0, 1, ..., i, ...` on the left are indices of the slots in the hash table (they are just for illustrative purposes and are not stored along with the table obviously!).

```
  # Logical model of Python Hash table
-+-----------------+
0| <hash|key|value>|
-+-----------------+
1|      ...        |
-+-----------------+
.|      ...        |
-+-----------------+
i|      ...        |
-+-----------------+
.|      ...        |
-+-----------------+
n|      ...        |
-+-----------------+
```
• When a new dict is initialized it starts with 8 slots. (see dictobject.h:49)

• When adding entries to the table, we start with some slot, `i`, that is based on the hash of the key. CPython initially uses `i = hash(key) & mask` (where `mask = PyDictMINSIZE - 1`, but that's not really important). Just note that the initial slot, `i`, that is checked depends on the hash of the key.

• If that slot is empty, the entry is added to the slot (by entry, I mean, `<hash|key|value>`). But what if that slot is occupied!? Most likely because another entry has the same hash (hash collision!)

• If the slot is occupied, CPython (and even PyPy) compares the hash AND the key (by compare I mean `==` comparison not the `is` comparison) of the entry in the slot against the hash and key of the current entry to be inserted (dictobject.c:337,344-345) respectively. If both match, then it thinks the entry already exists, gives up and moves on to the next entry to be inserted. If either the hash or the key don't match, it starts probing.

• Probing just means it searches the slots one by one to find an empty slot. Technically we could just go sequentially, `i+1, i+2, ...`, and use the first available one (that's linear probing). But for reasons explained beautifully in the comments (see dictobject.c:33-126), CPython uses random probing, in which the next slot is picked in a pseudo random order. The entry is added to the first empty slot. For this discussion, the actual algorithm used to pick the next slot is not really important (see dictobject.c:33-126 for the algorithm). What is important is that the slots are probed until the first empty slot is found.

• The same thing happens for lookups: it starts with the initial slot `i` (where `i` depends on the hash of the key). If the hash and the key both don't match the entry in the slot, it starts probing, until it finds a slot with a match. If all slots are exhausted, it reports a fail.

• BTW, the `dict` will be resized if it is two-thirds full. This avoids slowing down lookups. (see dictobject.h:64-65)
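The probing recurrence CPython uses (taken from the dictobject.c comments; `PERTURB_SHIFT` is 5) can be sketched as a generator, with the function name being mine:

```python
import itertools

def probe_slots(hash_value, nslots):
    # yields the sequence of slots CPython would examine for this hash;
    # nslots must be a power of two
    mask = nslots - 1
    perturb = hash_value
    i = hash_value & mask
    while True:
        yield i
        perturb >>= 5                    # PERTURB_SHIFT
        i = (5 * i + perturb + 1) & mask

slots = list(itertools.islice(probe_slots(0, 8), 8))
print(slots)  # -> [0, 1, 6, 7, 4, 5, 2, 3]
```

Once `perturb` reaches zero, the `5*i + 1 (mod 2**k)` recurrence cycles through every slot, so the probe sequence is guaranteed to visit the whole table eventually; the high hash bits mixed in early are what make the order pseudo random.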

NOTE: I did the research on the Python dict implementation in response to my own question about how multiple entries in a dict can have the same hash values. I posted a slightly edited version of the response here because all the research is very relevant for this question as well.