Timeout for snatched episodes. (Basic failed download function)

neoatomic asked for this feature over 1 year ago — 11 comments

neoatomic commented over 1 year ago

As previously requested in:

From time to time SR snatches a bad nzb or torrent file (password-protected, etc.) and the download fails or stalls. When that happens, SR leaves the episode in the "snatched" status indefinitely if you are not using NZBtoMedia/post-processing.

As many users find setting up post-processing difficult, it might be an idea to implement a "timeout" for snatched files/episodes: when SR snatches an episode, the download client has (for example) 24 hours to provide the file to SR. If SR does not find/receive it within this time period, the snatch status is reset to "wanted" and the failed nzb/torrent is written to failed.db so it is not used again.

With this function you would have a "basic" failed download function in SR even without using post-processing scripts.
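The proposal above could be sketched roughly like this. The status strings, the episode dict fields, and the `failed_history` set (standing in for failed.db) are illustrative assumptions, not SickRage's actual internals:

```python
import time

# Hypothetical status constants mirroring the statuses discussed in this thread.
SNATCHED, WANTED = "snatched", "wanted"

SNATCH_TIMEOUT_HOURS = 24  # the configurable timeout proposed above


def check_snatch_timeouts(episodes, failed_history, now=None):
    """Reset episodes that have sat in 'snatched' longer than the timeout.

    `episodes` is a list of dicts with 'status', 'snatch_time' (epoch
    seconds), and 'release' keys; `failed_history` is a set standing in
    for failed.db. Returns the number of episodes that were reset.
    """
    now = time.time() if now is None else now
    reset = 0
    for ep in episodes:
        if ep["status"] != SNATCHED:
            continue
        if now - ep["snatch_time"] > SNATCH_TIMEOUT_HOURS * 3600:
            failed_history.add(ep["release"])  # never snatch this release again
            ep["status"] = WANTED              # eligible for a fresh search
            reset += 1
    return reset
```

A periodic scheduler job could call this once an hour; anything that times out goes back into the normal search rotation minus the blacklisted release.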

kossboss commented over 1 year ago

Yes!! please!! ETA?

mlofdahl commented over 1 year ago

But not necessarily back to "wanted". If the snatched episode is an upgrade of a previous download, the status needs to go back to the quality of the existing episode.

kossboss commented over 1 year ago

If a snatch times out, you want to set it to Failed so that it doesn't download the same one again. At least that's how I understand the Failed status to work (if you enable both Failed options, which in my case I do). Side note: if anyone's failed downloads are grabbing the same files, stop SickRage, delete cache.db, and start SickRage again.

But you're right, it would be best to make it a configurable option; some may want "failed", others might want "wanted".
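The two preferences in this thread (go to "failed" vs. revert to "wanted"/the existing quality) could be reconciled with a small helper. This is a hypothetical sketch with made-up names, not SickRage code:

```python
def status_after_timeout(mark_failed, existing_quality=None):
    """Pick the status a timed-out snatch should fall back to.

    mark_failed:       user preference to use 'failed' instead of 'wanted'.
    existing_quality:  quality of an already-downloaded copy, if the snatch
                       was an upgrade attempt (mlofdahl's point above).
    """
    if mark_failed:
        return "failed"
    if existing_quality is not None:
        # Keep the copy we already have; just drop back to its quality.
        return ("downloaded", existing_quality)
    return "wanted"
```

With a setting like this, a plain snatch reverts to "wanted", an upgrade snatch reverts to the existing download's quality, and users who prefer the Failed workflow get "failed".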

xios01 commented over 1 year ago

Yes! Just need to declare the current snatch failed if the timeout occurs...

brando56894 commented about 1 year ago

We really need this, I'm constantly going through my library to change 'snatched' to 'failed' and it's pretty annoying, especially when you have like 60+ shows in your database and about 9500 episodes.

Veldkornet commented about 1 year ago

To be honest, I thought this was already implemented but just didn't work... Or what do the settings under "Settings -> Search Settings -> Episode Search -> Use Failed Downloads / Delete Failed" do?

What I was expecting was something similar to the way Sonarr does this. It selects the download based on whatever criteria (number of good downloads, quality, etc.), gives it to SABnzbd, and actually monitors the download (completion %, etc.). If it fails, it removes the download, marks it as bad (also on the server, e.g. at OzNZB), and looks for another one. There's also a status section where you can see the progress of downloads in %, which releases have been put on the blacklist, and so on. So no manual intervention is needed.

If we can get that kind of functionality into SickRage, then that would save me a lot of time ;)

MattPark commented 7 months ago

This has a chance of backfiring if you set the timeout to 2 days (reasonable) and then your NZB downloader goes down for 2 days, or worse yet is paused and the NZBs keep backing up. Maybe just checking the queue periodically would work.

KevinAnthony commented 5 months ago

Something to think about: some of the NZB/torrent downloaders report status; SABnzbd does. It would be really awesome if SR periodically polled the downloaders and, if a download is marked as failed, tried again.

This is the biggest feature keeping my system from running completely autonomously. It would be great to have.

Damian79 commented 5 months ago

Agreed, I would really appreciate this. I tried "Automatically Retry Failed Downloads with NZBGet FailureLink", but it does not work with my NZB feeds.

unimatrix27 commented 3 months ago

As far as I understand the SABnzbd API, there is a function to get failed jobs via the history call. Would that not be a better way of doing this, instead of just waiting and guessing what happened? If an NZB fails, SickRage could try the next one if there was more than one result from the NZB indexer.
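SABnzbd's API does expose past jobs (including failed ones) via `mode=history` with JSON output. A minimal sketch of the polling idea, assuming the standard response layout (`history` -> `slots` -> per-job `name`/`status`) and placeholder host/API-key values:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen


def parse_failed_slots(history_json):
    """Extract the names of failed jobs from a SABnzbd history response."""
    return [slot["name"]
            for slot in history_json["history"]["slots"]
            if slot["status"] == "Failed"]


def fetch_failed_jobs(host, api_key):
    """Query SABnzbd's history API and return the names of failed jobs.

    `host` (e.g. "localhost:8080") and `api_key` are placeholders you
    must supply from your own SABnzbd configuration.
    """
    query = urlencode({"mode": "history", "output": "json", "apikey": api_key})
    with urlopen("http://%s/sabnzbd/api?%s" % (host, query)) as resp:
        return parse_failed_slots(json.load(resp))
```

SR could run something like `fetch_failed_jobs(...)` on a schedule, match the returned names against snatched episodes, and kick off a new search for any that failed, which is more direct than guessing from a timeout alone.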
