
[BUG] Reddit errors with Post() takes no arguments #426

Closed
3 tasks done
baccccccc opened this issue Jan 5, 2025 · 0 comments · Fixed by #428
Labels: bug Something isn't working

@baccccccc

baccccccc commented Jan 5, 2025

I have taken steps to troubleshoot my issue first:

  • I'm using the latest version of cyberdrop-dl-patched.
  • I've read the wiki and my issue isn't already covered.
  • I've checked existing issues to ensure this hasn't been reported already.

Describe the bug

This is apparently new in 6.1.0 or some other recent version, because it used to work.

[01/04/25 17:57:53] ERROR    Scrape Failed: https://www.reddit.com/r/Vivian_Rose (Post() takes no arguments)                                                                                                                        logger.py:27
                             ╭──────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────╮             
                             │ C:\Users\[REDACTED]\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\cyberdrop_dl\utils\utilities.py:42 in wrapper           │             
                             │                                                                                                                                                                                                    │             
                             │    39 │   │   link = item if isinstance(item, URL) else item.url                                                                                                                                   │             
                             │    40 │   │   origin = exc_info = None                                                                                                                                                             │             
                             │    41 │   │   try:                                                                                                                                                                                 │             
                             │ ❱  42 │   │   │   return await func(self, *args, **kwargs)                                                                                                                                         │             
                             │    43 │   │   except CDLBaseError as e:                                                                                                                                                            │             
                             │    44 │   │   │   log_message_short = e_ui_failure = e.ui_message                                                                                                                                  │             
                             │    45 │   │   │   log_message = f"{e.ui_message} - {e.message}" if e.ui_message != e.message                                                                                                       │             
                             │       else e.message                                                                                                                                                                               │             
                             │                                                                                                                                                                                                    │             
                             │ C:\Users\[REDACTED]\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\cyberdrop_dl\scraper\crawlers\reddit_crawler.py:146 in  │             
                             │ post                                                                                                                                                                                               │             
                             │                                                                                                                                                                                                    │             
                             │   143 │   │   │   filename, ext = get_filename_and_ext(media_url.name)                                                                                                                             │             
                             │   144 │   │                                                                                                                                                                                        │             
                             │   145 │   │   if "redd.it" in media_url.host:                                                                                                                                                      │             
                             │ ❱ 146 │   │   │   new_scrape_item = await self.create_new_scrape_item(                                                                                                                             │             
                             │   147 │   │   │   │   media_url,                                                                                                                                                                   │             
                             │   148 │   │   │   │   scrape_item,                                                                                                                                                                 │             
                             │   149 │   │   │   │   title,                                                                                                                                                                       │             
                             │                                                                                                                                                                                                    │             
                             │ C:\Users\[REDACTED]\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages\cyberdrop_dl\scraper\crawlers\reddit_crawler.py:231 in  │             
                             │ create_new_scrape_item                                                                                                                                                                             │             
                             │                                                                                                                                                                                                    │             
                             │   228 │   │   add_parent: URL | None = None,                                                                                                                                                       │             
                             │   229 │   ) -> ScrapeItem:                                                                                                                                                                         │             
                             │   230 │   │   """Creates a new scrape item with the same parent as the old scrape item."""                                                                                                         │             
                             │ ❱ 231 │   │   post = Post(title=title, date=date)                                                                                                                                                  │             
                             │   232 │   │   new_scrape_item = self.create_scrape_item(                                                                                                                                           │             
                             │   233 │   │   │   old_scrape_item,                                                                                                                                                                 │             
                             │   234 │   │   │   link,                                                                                                                                                                            │             
                             ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯             
                             TypeError: Post() takes no arguments                                                                                                                                                                               

Desired result

The Reddit URL should scrape successfully without errors.

Steps to help reproduce the behavior

scrape some reddit URL, e.g. https://www.reddit.com/r/Vivian_Rose (NSFW)

Relevant logs

See the traceback above.

Operating system/environment

Windows 11 24H2

Python Version

3.13.496.0

Cyberdrop-DL version

6.1.0

Links, references, and/or additional comments?

No response

@baccccccc baccccccc added the bug Something isn't working label Jan 5, 2025
@NTFSvolume NTFSvolume assigned NTFSvolume and unassigned jbsparrow Jan 5, 2025
NTFSvolume added a commit to NTFSvolume/CyberDropDownloader that referenced this issue Jan 5, 2025
NTFSvolume added a commit that referenced this issue Jan 5, 2025
* fix: use a dataclass for reddit posts

Should fix #426

* refactor: pass `scrape_item` as origin for `web_pager` (chevereto)

* fix: "Loose Files" not being created (all crawlers)

* refactor: add custom MediaFireError (mediafire)

* fix: scrape error codes

* refactor: make `RealDebridError` inherit from `CDLBaseError`

* refactor: move `RealDebridError` to `clients.errors`

* refactor: move `VALIDATION_ERROR_FOOTER` to `constants`

* fix: undo crawler semaphore

Moved to PR #425
datawhores pushed a commit to datawhores/CyberDropDownloader that referenced this issue Jan 14, 2025