
Scraping more than 50 / 100 comments [Issue]

See original GitHub issue

I noticed that when scraping more than 50 or 100 comments, the JSON field comments_full is returned as None. Please consider my issue @kevinzg and @neon-ninja.

I think it's a comment rendering issue, or possibly a timeout.
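The reported symptom is easy to check programmatically. A minimal sketch, using plain dicts to stand in for the scraper's output (running facebook_scraper itself needs network access and login cookies, so the helper name and sample dicts here are illustrative, not part of the library):

```python
# Sketch: how the reported failure shows up in a post dict returned by
# facebook_scraper's get_posts(). Plain dicts stand in for real output.
def comments_missing(post):
    """True when comment extraction failed and comments_full came back None."""
    return post.get("comments_full") is None

# Failure mode reported in this issue (more than ~50-100 comments requested):
broken = {"post_id": "1517650235239786", "comments_full": None}
# Successful extraction:
ok = {"post_id": "1517650235239786", "comments_full": [{"comment_text": "hi"}]}

print(comments_missing(broken))  # True
print(comments_missing(ok))      # False
```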

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 13 (5 by maintainers)

Top GitHub Comments

1 reaction
neon-ninja commented on Apr 30, 2021

For comments, yes. See what happens when you try to view comments on https://m.facebook.com/story.php?story_fbid=1517650235239786&id=144655535872603 in an incognito window.

1 reaction
neon-ninja commented on Apr 30, 2021

With the code:

from facebook_scraper import get_posts
import time

start = time.time()
# Fetch a single post, expanding up to 500 comments; cookies.txt holds
# the logged-in Facebook session cookies.
posts = list(get_posts(
    post_urls=["https://m.facebook.com/story.php?story_fbid=1517650235239786&id=144655535872603"],
    cookies="cookies.txt",
    options={"comments": 500}
))
print(f"{len(posts[0].get('comments_full'))} comments extracted in {round(time.time() - start)}s")

I get the output: 510 comments extracted in 86s
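Note that len(posts[0].get('comments_full')) raises a TypeError when comments_full comes back as None, which is exactly the failure this issue reports. A defensive variant of the count, sketched in pure Python so it runs independently of the scraper (the helper name is hypothetical, not part of facebook_scraper):

```python
def count_comments(post):
    """Count extracted comments, treating a missing or None comments_full as zero."""
    # `x or []` turns both None and a missing key's default into an empty list.
    return len(post.get("comments_full") or [])

print(count_comments({"comments_full": None}))          # 0 (the bug case)
print(count_comments({}))                               # 0 (key absent)
print(count_comments({"comments_full": [{}, {}, {}]}))  # 3
```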

Read more comments on GitHub.
