Download large file in python with requests

Requests is a really nice library. I'd like to use it for downloading big files (>1 GB). The problem is that it's not possible to keep the whole file in memory; I need to read it in chunks. And this is a problem with the following code:

import requests

def DownloadFile(url):
    local_filename = url.split("/")[-1]
    r = requests.get(url)
    f = open(local_filename, "wb")
    for chunk in r.iter_content(chunk_size=512 * 1024): 
        if chunk: # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return 

For some reason it doesn't work this way: it still loads the whole response into memory before it is saved to a file.
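For reference, a minimal sketch of how streaming is usually done with requests; the stream=True flag and the chunked write via iter_content() are the assumptions here, not something taken from the snippet above:

import requests

def download_file(url):
    local_filename = url.split("/")[-1]
    # stream=True asks requests not to download the body immediately;
    # the content is then fetched piece by piece via iter_content().
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, "wb") as f:
            for chunk in r.iter_content(chunk_size=512 * 1024):
                if chunk:  # filter out keep-alive chunks
                    f.write(chunk)
    return local_filename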

UPDATE

If you need a small client (Python 2.x/3.x) which can download big files from FTP, you can find it here. It supports multithreading and reconnects (it monitors connections), and it also tunes socket parameters for the download task.
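As a rough illustration of the chunked FTP download itself (not the multithreading, reconnect, or socket-tuning parts), here is a minimal sketch using the standard ftplib module; the host, remote path, and block size are hypothetical values for the example:

import ftplib

def download_ftp_file(host, remote_path, local_filename, block_size=512 * 1024):
    # retrbinary() streams the file in blocks and hands each block to the
    # callback, so the whole file never has to sit in memory at once.
    with ftplib.FTP(host) as ftp:
        ftp.login()  # anonymous login; pass user/passwd for real servers
        with open(local_filename, "wb") as f:
            ftp.retrbinary("RETR " + remote_path, f.write, blocksize=block_size)

# usage (hypothetical host and path):
# download_ftp_file("ftp.example.com", "pub/big_archive.tar.gz", "big_archive.tar.gz")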