Commit
fix: use a new scrape_item instead of the old one
NTFSvolume committed Jan 30, 2025
1 parent d63e634 commit 503a28a
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion cyberdrop_dl/scraper/crawlers/xenforo_crawler.py
@@ -317,7 +317,7 @@ async def handle_link(self, scrape_item: ScrapeItem, link: URL) -> None:
         assert link.host
         new_scrape_item = self.create_scrape_item(scrape_item, link)
         if self.is_attachment(link):
-            return await self.handle_internal_link(scrape_item)
+            return await self.handle_internal_link(new_scrape_item)
         if self.primary_base_domain.host in link.host:  # type: ignore
             origin = scrape_item.parents[0]
             return log(f"Skipping nested thread URL {link} found on {origin}", 10)
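
For readers outside the project, here is a minimal sketch of the pattern this one-line fix addresses. The ScrapeItem, create_scrape_item, and handle_internal_link below are simplified stand-ins, not cyberdrop-dl's real classes: create_scrape_item builds a new item that carries the extracted link, so the downstream handler must be given that new item rather than the original one.

    # Simplified, hypothetical sketch of the pattern being fixed; the real
    # cyberdrop-dl ScrapeItem and crawler classes are more involved.
    import asyncio
    from dataclasses import dataclass, field


    @dataclass
    class ScrapeItem:
        url: str
        parents: list[str] = field(default_factory=list)


    def create_scrape_item(old: ScrapeItem, link: str) -> ScrapeItem:
        # The new item points at the extracted link while keeping the parent chain.
        return ScrapeItem(url=link, parents=[*old.parents, old.url])


    async def handle_internal_link(item: ScrapeItem) -> None:
        # The downstream handler acts on item.url, so it must receive the new item.
        print(f"handling attachment {item.url} (found via {item.parents})")


    async def handle_link(scrape_item: ScrapeItem, link: str) -> None:
        new_scrape_item = create_scrape_item(scrape_item, link)
        # Before the fix, the old `scrape_item` was passed here, so the handler
        # never saw `link`; passing `new_scrape_item` forwards the attachment URL.
        await handle_internal_link(new_scrape_item)


    asyncio.run(handle_link(ScrapeItem("https://forum.example/threads/1"),
                            "https://forum.example/attachments/2"))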
