Stream response body for large files

See original GitHub issue

Summary

Using response.body is inefficient when downloading large files, because it loads the full body into memory. That is not feasible for large zip archives, for instance. We need something like requests' response.raw, so the response body can be streamed to disk without holding the whole thing in RAM.

Code example to highlight the difference:

  • scrapy

with open(my_file, 'wb') as f:
    f.write(response.body)
    # loads all of response.body into memory, then writes to disk

  • requests

import shutil
import requests

with open(my_file, 'wb') as f:
    with requests.get(url, stream=True) as r:
        shutil.copyfileobj(r.raw, f)
        # memory efficient, only small chunks loaded in memory
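
One caveat with r.raw worth noting (an aside, not raised in the original issue): it bypasses requests' automatic content decoding, so a gzip- or deflate-encoded response lands on disk still compressed. iter_content decodes while streaming and is just as memory friendly; a minimal sketch, reusing url and my_file from above:

import requests

with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with open(my_file, 'wb') as f:
        # iter_content decodes Content-Encoding and yields bounded chunks,
        # so at most one 64 KiB chunk is held in memory at a time
        for chunk in r.iter_content(chunk_size=64 * 1024):
            f.write(chunk)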

Motivation

To be able to download large files when automatically scraping a full website.

Describe alternatives you’ve considered

An alternative is to use requests to download the file in a deferred, asynchronous process_item, but I think this is not efficient, since it won't use Scrapy's cache.
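
A rough sketch of that alternative, assuming a hypothetical FileDownloadPipeline and item fields file_url and file_path (none of these names come from the original issue). deferToThread keeps the blocking requests call off Twisted's reactor thread, but the download does bypass Scrapy's cache and middlewares:

import shutil

import requests
from twisted.internet import threads


class FileDownloadPipeline:
    """Hypothetical item pipeline that streams a file to disk with requests."""

    def _download(self, item):
        # blocking code: runs in a worker thread, not on the reactor
        with requests.get(item['file_url'], stream=True) as r:
            r.raise_for_status()
            with open(item['file_path'], 'wb') as f:
                shutil.copyfileobj(r.raw, f)
        return item

    def process_item(self, item, spider):
        # Scrapy accepts a Deferred returned from process_item
        return threads.deferToThread(self._download, item)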

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

4 reactions
elacuesta commented, Jun 10, 2020

See #4205 as well, which will be released shortly. Unfortunately, while this allows you to read the body in chunks, it doesn’t stop the whole body from being stored in memory. I guess some changes could be done so that we can indicate in the handler whether or not we want the chunk to be appended to the existing body, but it’s not a pretty design IMHO.
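
As a concrete illustration of that per-chunk access (using the bytes_received signal that Scrapy 2.2 later shipped; whether that is exactly the work referenced in #4205 is not stated in this thread), a minimal sketch with the spider name, URL, and output path as made-up placeholders. It matches the behaviour described above: each chunk can be written out as it arrives, but the full body is still accumulated for the final Response:

import scrapy
from scrapy import signals


class ChunkWriterSpider(scrapy.Spider):
    name = 'chunk_writer'  # hypothetical
    start_urls = ['https://example.org/big.zip']  # placeholder URL

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.on_bytes_received,
                                signal=signals.bytes_received)
        return spider

    def on_bytes_received(self, data, request, spider):
        # append each chunk to disk as it arrives; Scrapy still keeps
        # the whole body in memory for the final Response object
        with open('/tmp/big.zip', 'ab') as f:  # placeholder path
            f.write(data)

    def parse(self, response):
        pass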

Now that we have async/await support, I think a nice API would be something like:

def parse(self, response):
    yield Request("https://example.org", callback=self.parse_stream, stream=True)
    # or maybe
    yield StreamRequest("https://example.org", callback=self.parse_stream)

async def parse_stream(self, response):
    assert response.body is None
    async for data in response.stream_body():
        # do something with the data

1 reaction
wRAR commented, Jun 10, 2020

Related to #3880, I think.

