
Unable to retrieve the full page of a URL

See original GitHub issue

I am using HTTPie 0.9.8, and when I try to download a Dropbox web page, I only get the page partially downloaded.

Using this dropbox link : https://www.dropbox.com/sh/erv1tycztizfvyd/AADeXwemV9sK37MSHqxmYz_5a?dl=0

and using the command : http https://www.dropbox.com/sh/erv1tycztizfvyd/AADeXwemV9sK37MSHqxmYz_5a?dl=0 -o page.html

I get links to 30 files, whereas the folder actually contains 50 files if it is opened in a normal web browser such as Firefox or Chrome.

Any idea?


Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

1 reaction
isidentical commented, Nov 20, 2021

The reason there are only 30-something entries in the HTTPie output and 50-something in the browser is that once the page is fully loaded, the browser starts executing JavaScript, which does light pagination to load more entries. If you want to test this yourself, simply block the https://www.dropbox.com/list_shared_link_folder_entries API in your browser’s developer tools and try again. This time, you’ll also see only 30-something entries.

I am afraid this is something HTTPie does not support, but it can be achieved by other means (e.g. using a headless browser via Selenium to fully load the page and then do the extraction).
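The headless-browser idea above could be sketched roughly as follows. This is only an illustration, not part of HTTPie: it assumes Selenium and a matching chromedriver are installed, and the scroll count and wait time are guesses that would need tuning for Dropbox's actual pagination behavior.

```python
def fetch_rendered_page(url, output_path, scrolls=5, wait_seconds=3):
    """Load a page in a headless browser so JavaScript pagination runs,
    then save the fully rendered HTML to output_path.

    Requires `pip install selenium` plus a Chrome/chromedriver install;
    imports are done lazily so this module loads even without Selenium.
    """
    import time
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        # Scroll to the bottom repeatedly to trigger lazy pagination,
        # waiting between scrolls for new entries to load.
        for _ in range(scrolls):
            driver.execute_script(
                "window.scrollTo(0, document.body.scrollHeight);"
            )
            time.sleep(wait_seconds)
        with open(output_path, "w", encoding="utf-8") as fh:
            fh.write(driver.page_source)
    finally:
        driver.quit()
```

The saved file can then be parsed with any HTML library, since by that point the JavaScript-loaded entries are part of the DOM.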

0 reactions
Bhavye-Malhotra commented, Nov 20, 2021

@peterpt Hey, I tried to extract the URLs manually by downloading the source HTML page locally and running the script on that HTML file. It still shows 31 links, not 48 (the total number of files), so I think it’s a thing on Dropbox’s side and not HTTPie 😅. Hope it helps.
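The manual check described above can be reproduced with a short stdlib-only script that counts anchor links in saved HTML. The sample markup below is made up for illustration; real Dropbox markup differs, but the point stands: only entries present in the static HTML are counted, and anything added later by JavaScript pagination never appears.

```python
from html.parser import HTMLParser


class LinkCounter(HTMLParser):
    """Collect the href of every <a> tag in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def count_links(html_text):
    counter = LinkCounter()
    counter.feed(html_text)
    return len(counter.links)


# Made-up sample standing in for a saved page: two entries in the
# static HTML, so count_links reports 2 regardless of how many more
# a browser would load via JavaScript.
sample = '<a href="/f/1">one</a><a href="/f/2">two</a><p>no link</p>'
print(count_links(sample))  # → 2
```

Running this against the HTML saved by HTTPie versus the HTML saved from a fully loaded browser tab ("Save Page As") makes the difference in link counts easy to verify.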


