
python - Is there a quicker way to filter a Pandas data frame based on the number of recurring values?

Currently I am using the following line:

 df = df.groupby('i').filter(lambda g: len(g) > 500)

This works as intended (tested on other data frames), except when dealing with a large number of groups. I am trying to use it with around 50,000 groups and have so far not seen my program get past this line; the longest I have let it run is a bit under 48 hours.

Edit: The method works fine with a large number of groups, provided the lambda function does not remove all of them. Decreasing the minimum group length to 250 allowed the program to execute within 30 seconds.
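
For reference, a minimal self-contained sketch of the same filter on a toy data frame (the column name 'i' and the 500-row threshold are taken from the line above; the random data is made up purely for illustration):

import numpy as np
import pandas

# Toy frame: a single key column 'i' with many repeated values.
df = pandas.DataFrame({'i': np.random.randint(0, 100, size=100_000)})

# Keep only the rows whose group occurs more than 500 times.
df = df.groupby('i').filter(lambda g: len(g) > 500)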



1 Reply


This is a case for parallel computing if your processor has multiple cores...

import pandas
from multiprocessing import Pool, cpu_count

def applyParallel(dfGrouped, func):
    # Run func on each group in a separate worker process.
    with Pool(cpu_count()) as p:
        ret_list = p.map(func, [group for name, group in dfGrouped])
    # Stitch the processed groups back into a single data frame.
    return pandas.concat(ret_list)

This does not cover every situation in which you might apply a function to groups, but it will cover yours.
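
As a hedged usage sketch building on applyParallel above (it assumes that function is defined in the same module): the column name 'i' and the 500-row cut-off come from the question, keep_large is a hypothetical helper, and the toy data is made up. The __main__ guard is needed on platforms that spawn worker processes.

import numpy as np
import pandas

# Hypothetical per-group function: mimic the original filter by returning the
# group unchanged when it is large enough, otherwise an empty slice of it.
def keep_large(group):
    return group if len(group) > 500 else group.iloc[0:0]

if __name__ == "__main__":
    # Toy frame with repeated keys in column 'i'; roughly 500 rows per group.
    df = pandas.DataFrame({'i': np.random.randint(0, 2000, size=1_000_000)})
    filtered = applyParallel(df.groupby('i'), keep_large)
    print(len(filtered))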

