
Only the initial URL is crawled, and --max-crawl-depth is ignored

See original GitHub issue

When running lighthouse-parade using npx lighthouse-parade <url>, or npx lighthouse-parade <url> --max-crawl-depth 2, only the initial URL is reported on.

  • Several of us have tried this and had the same result
  • We have tried this using several node versions
  • We have tried using multiple websites and in each case validated that they have links to other URLs on the same site

Curious if anyone else is seeing the same issue? We are both on Windows.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
calebeby commented, May 24, 2022

Published in v2.0.2

1 reaction
calebeby commented, May 23, 2022

It turns out I had been testing with 2.0.0 instead of the latest 2.0.1. 2.0.1 has this bug; 2.0.0 works correctly when --include-path-glob is not passed.

I’ve opened a PR to fix this: https://github.com/cloudfour/lighthouse-parade/pull/103

That should be merged soon; in the meantime you can pass --include-path-glob "/**" or use 2.0.0.
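The two workarounds from the thread can be sketched as shell commands. The URL is a placeholder; the flags (--max-crawl-depth, --include-path-glob) and version numbers are the ones named above:

```shell
# Option 1: pin the last known-good release, which honors --max-crawl-depth
# (npx accepts an explicit package@version)
npx lighthouse-parade@2.0.0 https://example.com --max-crawl-depth 2

# Option 2: stay on 2.0.1 but pass an explicit glob so pages beyond the
# initial URL are crawled (works around the bug fixed in v2.0.2)
npx lighthouse-parade@2.0.1 https://example.com --include-path-glob "/**"
```

Once v2.0.2 is published, neither workaround should be needed.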

