I have a Python pandas DataFrame rpt:
rpt
<class "pandas.core.frame.DataFrame">
MultiIndex: 47518 entries, ("000002", "20120331") to ("603366", "20091231")
Data columns:
STK_ID 47518 non-null values
STK_Name 47518 non-null values
RPT_Date 47518 non-null values
sales 47518 non-null values
I can filter the rows whose stock id is "600809" like this:
rpt[rpt["STK_ID"] == "600809"]
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 25 entries, ('600809', '20120331') to ('600809', '20060331')
Data columns:
STK_ID 25 non-null values
STK_Name 25 non-null values
RPT_Date 25 non-null values
sales 25 non-null values
and I want to get all the rows of some stocks together, such as ["600809","600141","600329"]. That means I want a syntax like this:
stk_list = ["600809","600141","600329"]
rst = rpt[rpt["STK_ID"] in stk_list]  # this does not work in pandas
Since pandas does not accept the command above, how can I achieve this?
Related question:
List comprehension vs. lambda + filter
5 answers
I happened to find myself having a basic filtering need: I have a list and I have to filter it by an attribute of the items.
My code looked like this:
my_list = [x for x in my_list if x.attribute == value]
But then I thought, wouldn't it be better to write it like this?
my_list = filter(lambda x: x.attribute == value, my_list)
It"s more readable, and if needed for performance the lambda could be taken out to gain something.
Question is: are there any caveats in using the second way? Any performance difference? Am I missing the Pythonic Way‚Ñ¢ entirely and should do it in yet another way (such as using itemgetter instead of the lambda)?
Answer #1
It is strange how much beauty varies for different people. I find the list comprehension much clearer than filter + lambda, but use whichever you find easier.
There are two things that may slow down your use of filter.
The first is the function call overhead: as soon as you use a Python function (whether created by def or lambda) it is likely that filter will be slower than the list comprehension. It almost certainly is not enough to matter, and you shouldn't think much about performance until you've timed your code and found it to be a bottleneck, but the difference will be there.
The other overhead that might apply is that the lambda is being forced to access a scoped variable (value). That is slower than accessing a local variable, and in Python 2.x the list comprehension only accesses local variables. If you are using Python 3.x, the list comprehension runs in a separate function so it will also be accessing value through a closure and this difference won't apply.
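If you want to see the difference for yourself, a quick timeit sketch like the one below works; the Item class, the attribute values and the list size are made up for illustration:
import timeit

setup = """
class Item:
    def __init__(self, attribute):
        self.attribute = attribute

my_list = [Item(i % 10) for i in range(1000)]
value = 5
"""

# Time the comprehension and filter + lambda over the same data.
print(timeit.timeit("[x for x in my_list if x.attribute == value]",
                    setup=setup, number=1000))
print(timeit.timeit("list(filter(lambda x: x.attribute == value, my_list))",
                    setup=setup, number=1000))
On a typical run the two come out close, which is the answer's point: measure before optimising.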
The other option to consider is to use a generator instead of a list comprehension:
def filterbyvalue(seq, value):
    for el in seq:
        if el.attribute == value:
            yield el
Then in your main code (which is where readability really matters) you've replaced both list comprehension and filter with a hopefully meaningful function name.
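A minimal usage sketch, assuming a stand-in Item class (not from the original post) and the filterbyvalue generator above:
class Item:
    def __init__(self, attribute):
        self.attribute = attribute

items = [Item(1), Item(2), Item(1)]
ones = list(filterbyvalue(items, 1))  # keeps the two items whose attribute == 1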
Answer #2
This is a somewhat religious issue in Python. Even though Guido considered removing map, filter and reduce from Python 3, there was enough of a backlash that in the end only reduce was moved from built-ins to functools.reduce.
Personally I find list comprehensions easier to read. It is more explicit what is happening from the expression [i for i in list if i.attribute == value], as all the behaviour is on the surface, not inside the filter function.
I would not worry too much about the performance difference between the two approaches as it is marginal. I would really only optimise this if it proved to be the bottleneck in your application which is unlikely.
Also, since the BDFL wanted filter gone from the language, surely that automatically makes list comprehensions more Pythonic ;-)
Related question:
How do I do a not equal in Django queryset filtering?
5 answers
In Django model QuerySets, I see that there is a __gt and __lt for comparative values, but is there a __ne or != (not equals)? I want to filter out using a not equals. For example, for
Model:
bool a;
int x;
I want to do
results = Model.objects.exclude(a=True, x!=5)
The != is not correct syntax. I also tried __ne.
I ended up using:
results = Model.objects.exclude(a=True, x__lt=5).exclude(a=True, x__gt=5)
Answer #1
You can use Q objects for this. They can be negated with the ~ operator and combined much like normal Python expressions:
from myapp.models import Entry
from django.db.models import Q
Entry.objects.filter(~Q(id=3))
will return all entries except the one(s) with 3 as their ID:
[<Entry: Entry object>, <Entry: Entry object>, <Entry: Entry object>, ...]
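Applied to the model in the question, a sketch (assuming a Django model class named Model with boolean field a and integer field x, as in the question) of the intended exclude would be:
from django.db.models import Q

# Exclude rows where a is True and x is anything other than 5,
# i.e. the intended exclude(a=True, x!=5):
results = Model.objects.exclude(Q(a=True) & ~Q(x=5))
This is effectively equivalent to the chained .exclude() workaround above, expressed in a single call.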
Filter dataframe rows if value in column is in a set list of values
1 answer
Answer #1
Use the isin method:
rpt[rpt["STK_ID"].isin(stk_list)]