Python | Pandas Series.ptp()


Pandas Series.ptp() returns the difference between the maximum value and
the minimum value in the object. This is the equivalent of the numpy.ndarray ptp method.

Syntax: Series.ptp(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)

axis: Axis for the function to be applied on.
skipna: Exclude NA/null values when computing the result.
level: If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar.
numeric_only: Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
**kwargs: Additional keyword arguments to be passed to the function.

Returns: scalar, or Series if a level is specified
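Note that Series.ptp() was deprecated in pandas 0.24 and later removed in pandas 1.0, so on current pandas versions the same result comes from subtracting the minimum from the maximum, or from NumPy's ptp on the underlying array. A minimal sketch:

```python
import numpy as np
import pandas as pd

sr = pd.Series([10, 25, 3, 11, 24, 6])

# Equivalent of sr.ptp() that works on any pandas version
rng = sr.max() - sr.min()
print(rng)  # 22

# Or via NumPy on the underlying array
rng_np = np.ptp(sr.to_numpy())
print(rng_np)  # 22
```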

Example #1: Use Series.ptp() to return the difference between the maximum and minimum values of the underlying data in the given Series object.

# import pandas library
import pandas as pd

# Create the series
sr = pd.Series([10, 25, 3, 11, 24, 6])

# Create the index
index_ = ['Coca Cola', 'Sprite', 'Coke', 'Fanta', 'Dew', 'ThumbsUp']

# Set the index
sr.index = index_

# Print the series
print(sr)


Now we will use Series.ptp() to find the difference between the maximum and minimum values in the given Series object.

# return the difference between the
# maximum and minimum value
result = sr.ptp()

# Print the result
print(result)


As we can see in the output, Series.ptp() successfully returned the difference between the maximum and minimum values of the underlying data in the given Series object.

Example #2: Use Series.ptp() to return the difference between the maximum and minimum values of the underlying data in the given Series object.

# import pandas library
import pandas as pd

# Create the series
sr = pd.Series([11, 21, 8, 18, 65, 84, 32, 10, 5, 24, 32])

# Print the series
print(sr)

# return the difference between the
# maximum and minimum value
result = sr.ptp()

# Print the result
print(result)


As we can see in the output, Series.ptp() successfully returned the difference between the maximum and minimum values of the underlying data in the given Series object.

Example #3: Use Series.ptp() to return the difference between the maximum and minimum values of the underlying data in the given Series object. This series object contains some missing values.

# import pandas library
import pandas as pd

# Create the series
sr = pd.Series([19.5, 16.8, None, 22.78, None, 20.124, None, 18.1002, None])

# Print the series
print(sr)


We will now use Series.ptp() to find the difference between the maximum and minimum values in the given Series object, skipping the missing values in the calculation.

# return the difference between the
# maximum and minimum value
result = sr.ptp(skipna=True)

# Print the result
print(result)


As we can see in the output, Series.ptp() successfully returned the difference between the maximum and minimum values of the underlying data in the given Series object.

Python | Pandas Series.ptp(): StackOverflow Questions

Why use argparse rather than optparse?

I noticed that the Python 2.7 documentation includes yet another command-line parsing module. In addition to getopt and optparse we now have argparse.

Why has yet another command-line parsing module been created? Why should I use it instead of optparse? Are there new features that I should know about?

Answer #1

If the array contains both positive and negative data, I'd go with:

import numpy as np

a = np.random.rand(3,2)

# Normalised [0,1]
b = (a - np.min(a))/np.ptp(a)

# Normalised [0,255] as integer: don't forget the parenthesis before astype(int)
c = (255*(a - np.min(a))/np.ptp(a)).astype(int)        

# Normalised [-1,1]
d = 2.*(a - np.min(a))/np.ptp(a)-1

If the array contains nan, one solution could be to just remove them as:

def nan_ptp(a):
    return np.ptp(a[np.isfinite(a)])

b = (a - np.nanmin(a))/nan_ptp(a)

However, depending on the context you might want to treat nan differently: e.g. interpolate the value, replace it with e.g. 0, or raise an error.
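As a sketch of those other options, nan values can also be handled with NumPy's nan-aware reductions (keeping nan in the output) or replaced before normalising; the data below is made up for illustration:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0, 5.0])  # made-up data containing a nan

# Normalise with nan-aware reductions; nan entries stay nan in the result
b = (a - np.nanmin(a)) / (np.nanmax(a) - np.nanmin(a))
print(b)  # 0.0, nan, 0.5, 1.0

# Or replace nan first (note: a fill value of 0 outside the finite data
# range changes both the min and the ptp of the array)
c = np.nan_to_num(a, nan=0.0)
```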

Finally, worth mentioning even if it's not OP's question, standardization:

e = (a - np.mean(a)) / np.std(a)

Answer #2

The python keyring library integrates with the CryptProtectData API on Windows (along with relevant APIs on Mac and Linux) which encrypts data with the user's logon credentials.

Simple usage:

import keyring

# the service is just a namespace for your app
service_id = "IM_YOUR_APP!"

keyring.set_password(service_id, "dustin", "my secret password")
password = keyring.get_password(service_id, "dustin") # retrieve password

Usage if you want to store the username on the keyring:

import keyring

MAGIC_USERNAME_KEY = "im_the_magic_username_key"

# the service is just a namespace for your app
service_id = "IM_YOUR_APP!"  

username = "dustin"

# save password
keyring.set_password(service_id, username, "password")

# optionally, abuse `set_password` to save username onto keyring
# we're just using some known magic string in the username field
keyring.set_password(service_id, MAGIC_USERNAME_KEY, username)

Later to get your info from the keyring

# again, abusing `get_password` to get the username.
# after all, the keyring is just a key-value store
username = keyring.get_password(service_id, MAGIC_USERNAME_KEY)
password = keyring.get_password(service_id, username)  

Items are encrypted with the user's operating system credentials, thus other applications running in your user account would be able to access the password.

To obscure that vulnerability a bit you could encrypt/obfuscate the password in some manner before storing it on the keyring. Of course, anyone who was targeting your script would just be able to look at the source and figure out how to unencrypt/unobfuscate the password, but you'd at least prevent some application vacuuming up all passwords in the vault and getting yours as well.
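A minimal sketch of that obfuscation idea (the helper names are invented, and base64 is encoding, not encryption, so this only deters casual bulk harvesting):

```python
import base64

def obfuscate(secret):
    # NOT encryption: just hides the plain text from casual inspection
    return base64.b64encode(secret.encode("utf-8")).decode("ascii")

def deobfuscate(blob):
    return base64.b64decode(blob.encode("ascii")).decode("utf-8")

# e.g. keyring.set_password(service_id, username, obfuscate("my secret password"))
#      password = deobfuscate(keyring.get_password(service_id, username))
print(deobfuscate(obfuscate("my secret password")))  # my secret password
```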

Answer #3

To read user input you can try the cmd module for easily creating a mini-command line interpreter (with help texts and autocompletion) and raw_input (input for Python 3+) for reading a line of text from the user.

text = raw_input("prompt")  # Python 2
text = input("prompt")  # Python 3

Command line inputs are in sys.argv. Try this in your script:

import sys
print (sys.argv)

There are two modules for parsing command line options: optparse (deprecated since Python 2.7, use argparse instead) and getopt. If you just want to input files to your script, behold the power of fileinput.
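As a small sketch of fileinput (the helper name is invented; in a real script you would typically pass sys.argv[1:] as the file list):

```python
import fileinput

def number_lines(paths):
    """Return each line of the given files as one continuous stream,
    prefixed with the file name and the per-file line number."""
    out = []
    for line in fileinput.input(files=paths):
        out.append("%s:%d: %s" % (fileinput.filename(),
                                  fileinput.filelineno(), line))
    return out
```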

The Python library reference is your friend.

Answer #4

An example (listing the methods of the optparse.OptionParser class):

>>> from optparse import OptionParser
>>> import inspect
>>> inspect.getmembers(OptionParser, predicate=inspect.ismethod)
[([("__init__", <unbound method OptionParser.__init__>),
 ("add_option", <unbound method OptionParser.add_option>),
 ("add_option_group", <unbound method OptionParser.add_option_group>),
 ("add_options", <unbound method OptionParser.add_options>),
 ("check_values", <unbound method OptionParser.check_values>),
 ("destroy", <unbound method OptionParser.destroy>),
  <unbound method OptionParser.disable_interspersed_args>),
  <unbound method OptionParser.enable_interspersed_args>),
 ("error", <unbound method OptionParser.error>),
 ("exit", <unbound method OptionParser.exit>),
 ("expand_prog_name", <unbound method OptionParser.expand_prog_name>),
# python3
>>> inspect.getmembers(OptionParser, predicate=inspect.isfunction)

Notice that getmembers returns a list of 2-tuples. The first item is the name of the member, the second item is the value.
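A toy illustration of that return shape (the Greeter class is made up; on Python 3, methods looked up on a class are plain functions, hence the isfunction predicate):

```python
import inspect

class Greeter:
    def __init__(self, name):
        self.name = name

    def hello(self):
        return "hi %s" % self.name

# Each member is a (name, value) 2-tuple, sorted by name
members = inspect.getmembers(Greeter, predicate=inspect.isfunction)
print([name for name, value in members])  # ['__init__', 'hello']
```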

You can also pass an instance to getmembers:

>>> parser = OptionParser()
>>> inspect.getmembers(parser, predicate=inspect.ismethod)

Answer #5

If you"re just wanting (semi) contiguous regions, there"s already an easy implementation in Python: SciPy"s ndimage.morphology module. This is a fairly common image morphology operation.

Basically, you have 5 steps:

import scipy as sp
import scipy.ndimage

def find_paws(data, smooth_radius=5, threshold=0.0001):
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    thresh = data > threshold
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    coded_paws, num_paws = sp.ndimage.label(filled)
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices
  1. Blur the input data a bit to make sure the paws have a continuous footprint. (It would be more efficient to just use a larger kernel (the structure kwarg to the various scipy.ndimage.morphology functions) but this isn't quite working properly for some reason...)

  2. Threshold the array so that you have a boolean array of places where the pressure is over some threshold value (i.e. thresh = data > value)

  3. Fill any internal holes, so that you have cleaner regions (filled = sp.ndimage.morphology.binary_fill_holes(thresh))

  4. Find the separate contiguous regions (coded_paws, num_paws = sp.ndimage.label(filled)). This returns an array with the regions coded by number (each region is a contiguous area of a unique integer, from 1 up to the number of paws, with zeros everywhere else).

  5. Isolate the contiguous regions using data_slices = sp.ndimage.find_objects(coded_paws). This returns a list of tuples of slice objects, so you could get the region of the data for each paw with [data[x] for x in data_slices]. Instead, we'll draw a rectangle based on these slices, which takes slightly more work.

The two animations below show your "Overlapping Paws" and "Grouped Paws" example data. This method seems to be working perfectly. (And for whatever it's worth, this runs much more smoothly than the GIF images below on my machine, so the paw detection algorithm is fairly fast...)

(Animations: "Overlapping Paws" and "Grouped Paws")

Here"s a full example (now with much more detailed explanations). The vast majority of this is reading the input and making an animation. The actual paw detection is only 5 lines of code.

import numpy as np
import scipy as sp
import scipy.ndimage

import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

def animate(input_filename):
    """Detects paws and animates the position and raw data of each frame
    in the input file"""
    # With matplotlib, it's much, much faster to just update the properties
    # of a display object than it is to create a new one, so we'll just update
    # the data and position of the same objects throughout this animation...

    infile = paw_file(input_filename)

    # Since we"re making an animation with matplotlib, we need 
    # ion() instead of show()...
    fig = plt.figure()
    ax = fig.add_subplot(111)

    # Make an image based on the first frame that we'll update later
    # (The first frame is never actually displayed)
    im = ax.imshow(next(infile)[1])

    # Make 4 rectangles that we can later move to the position of each paw
    rects = [Rectangle((0,0), 1,1, fc="none", ec="red") for i in range(4)]
    [ax.add_patch(rect) for rect in rects]

    title = ax.set_title("Time 0.0 ms")

    # Process and display each frame
    for time, frame in infile:
        paw_slices = find_paws(frame)

        # Hide any rectangles that might be visible
        [rect.set_visible(False) for rect in rects]

        # Set the position and size of a rectangle for each paw and display it
        for slice, rect in zip(paw_slices, rects):
            dy, dx = slice
            rect.set_xy((dx.start, dy.start))
            rect.set_width(dx.stop - dx.start + 1)
            rect.set_height(dy.stop - dy.start + 1)

        # Update the image data and title of the plot
        im.set_data(frame)
        title.set_text("Time %0.2f ms" % time)
        im.set_clim([frame.min(), frame.max()])

def find_paws(data, smooth_radius=5, threshold=0.0001):
    """Detects and isolates contiguous regions in the input array"""
    # Blur the input data a bit so the paws have a continuous footprint
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    # Threshold the blurred data (this needs to be a bit > 0 due to the blur)
    thresh = data > threshold
    # Fill any interior holes in the paws to get cleaner regions...
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    # Label each contiguous paw
    coded_paws, num_paws = sp.ndimage.label(filled)
    # Isolate the extent of each paw
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices

def paw_file(filename):
    """Returns an iterator that yields the time and data in each frame
    The infile is an ascii file of timesteps formatted similar to this:

    Frame 0 (0.00 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0

    Frame 1 (0.53 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0
    """
    with open(filename) as infile:
        while True:
            try:
                time, data = read_frame(infile)
                yield time, data
            except StopIteration:
                break

def read_frame(infile):
    """Reads a frame from the infile."""
    frame_header = next(infile).strip().split()
    time = float(frame_header[-2][1:])
    data = []
    while True:
        line = next(infile).strip().split()
        if line == []:
            break
        data.append(line)
    return time, np.array(data, dtype=float)

if __name__ == "__main__":
    animate("Overlapping paws.bin")
    animate("Grouped up paws.bin")
    animate("Normal measurement.bin")

Update: As far as identifying which paw is in contact with the sensor at what times, the simplest solution is to just do the same analysis, but use all of the data at once. (i.e. stack the input into a 3D array, and work with it, instead of the individual time frames.) Because SciPy's ndimage functions are meant to work with n-dimensional arrays, we don't have to modify the original paw-finding function at all.
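The stacking idea can be sketched with a couple of made-up 2x2 frames: np.dstack adds a third (time) axis, so the n-dimensional ndimage functions see a pressure region that persists across frames as one connected 3D blob.

```python
import numpy as np

# Two made-up 2x2 sensor frames from successive timesteps
frame0 = np.array([[0.0, 0.1],
                   [0.2, 0.0]])
frame1 = np.array([[0.0, 0.3],
                   [0.4, 0.0]])

# Stack along a new third axis: shape becomes (rows, cols, n_frames)
data = np.dstack([frame0, frame1])
print(data.shape)   # (2, 2, 2)
print(data[0, 1])   # [0.1 0.3] -- one sensor cell across time
```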

# This uses functions (and imports) in the previous code example!!
def paw_regions(infile):
    # Read in and stack all data together into a 3D array
    data, time = [], []
    for t, frame in paw_file(infile):
        time.append(t)
        data.append(frame)
    data = np.dstack(data)
    time = np.asarray(time)

    # Find and label the paw impacts
    data_slices = find_paws(data, smooth_radius=4)

    # Sort by time of initial paw impact... This way we can determine which
    # paws are which relative to the first paw with a simple modulo 4.
    # (Assuming a 4-legged dog, where all 4 paws contacted the sensor)
    data_slices.sort(key=lambda dat_slice: dat_slice[2].start)

    # Plot up a simple analysis
    fig = plt.figure()
    ax1 = fig.add_subplot(2,1,1)
    annotate_paw_prints(time, data, data_slices, ax=ax1)
    ax2 = fig.add_subplot(2,1,2)
    plot_paw_impacts(time, data_slices, ax=ax2)

def plot_paw_impacts(time, data_slices, ax=None):
    if ax is None:
        ax = plt.gca()

    # Group impacts by paw...
    for i, dat_slice in enumerate(data_slices):
        dx, dy, dt = dat_slice
        paw = i%4 + 1
        # Draw a bar over the time interval where each paw is in contact
        ax.barh(y=paw, width=time[dt].ptp(), height=0.2,
                left=time[dt].min(), align="center", color="red")
    ax.set_yticks(range(1, 5))
    ax.set_yticklabels(["Paw 1", "Paw 2", "Paw 3", "Paw 4"])
    ax.set_xlabel("Time (ms) Since Beginning of Experiment")
    ax.set_title("Periods of Paw Contact")

def annotate_paw_prints(time, data, data_slices, ax=None):
    if ax is None:
        ax = plt.gca()

    # Display all paw impacts (sum over time)

    # Annotate each impact with which paw it is
    # (Relative to the first paw to hit the sensor)
    x, y = [], []
    for i, region in enumerate(data_slices):
        dx, dy, dz = region
        # Get x,y center of slice...
        x0 = 0.5 * (dx.start + dx.stop)
        y0 = 0.5 * (dy.start + dy.stop)
        x.append(x0); y.append(y0)

        # Annotate the paw impacts         
        ax.annotate("Paw %i" % (i%4 +1), (x0, y0),  
            color="red", ha="center", va="bottom")

    # Plot line connecting paw impacts
    ax.plot(x,y, "-wo")
    ax.set_title("Order of Steps")


Answer #6

As of Python 2.7, optparse is deprecated, and will hopefully go away in the future.

argparse is better for all the reasons listed on its original page, among them:

  • handling positional arguments
  • supporting sub-commands
  • allowing alternative option prefixes like + and /
  • handling zero-or-more and one-or-more style arguments
  • producing more informative usage messages
  • providing a much simpler interface for custom types and actions

More information is also in PEP 389, which is the vehicle by which argparse made it into the standard library.
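Two of those features, positional arguments and sub-commands, can be sketched in a few lines; the "copy" command and its arguments are invented for illustration:

```python
import argparse

parser = argparse.ArgumentParser(prog="tool")
sub = parser.add_subparsers(dest="command")

# A made-up "copy" sub-command with two positional arguments
copy = sub.add_parser("copy")
copy.add_argument("src")
copy.add_argument("dst")
copy.add_argument("-v", "--verbose", action="store_true")

args = parser.parse_args(["copy", "a.txt", "b.txt", "-v"])
print(args.command, args.src, args.dst, args.verbose)  # copy a.txt b.txt True
```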

Answer #7

This answer suggests optparse which is appropriate for older Python versions. For Python 2.7 and above, argparse replaces optparse. See this answer for more information.

As other people pointed out, you are better off going with optparse over getopt. getopt is pretty much a one-to-one mapping of the standard getopt(3) C library functions, and not very easy to use.

optparse, while being a bit more verbose, is much better structured and simpler to extend later on.

Here"s a typical line to add an option to your parser:

parser.add_option("-q", "--query",
            action="store", dest="query",
            help="query string", default="spam")

It pretty much speaks for itself; at processing time, it will accept -q or --query as options, store the argument in an attribute called query, and use a default value if you don't specify it. It is also self-documenting in that you declare the help argument (which will be used when run with -h/--help) right there with the option.

Usually you parse your arguments with:

options, args = parser.parse_args()

This will, by default, parse the standard arguments passed to the script (sys.argv[1:]).

options.query will then be set to the value you passed to the script.

You create a parser simply by doing

parser = optparse.OptionParser()

These are all the basics you need. Here"s a complete Python script that shows this:

import optparse

parser = optparse.OptionParser()

parser.add_option("-q", "--query",
    action="store", dest="query",
    help="query string", default="spam")

options, args = parser.parse_args()

print "Query string:", options.query

A few lines of Python that show you the basics.

Save it to a script file and run it once with no arguments, and once with

python --query myquery

Beyond that, you will find that optparse is very easy to extend. In one of my projects, I created a Command class which allows you to nest subcommands in a command tree easily. It uses optparse heavily to chain commands together. It's not something I can easily explain in a few lines, but feel free to browse around in my repository for the main class, as well as a class that uses it and the option parser.
