Autoheal functionality

This commit is contained in:
Ben Sarmiento
2023-10-24 14:56:03 +02:00
parent 16668b90a1
commit 003ba1eb51
8 changed files with 362 additions and 276 deletions

README.md

@@ -1,16 +1,29 @@
# zurg
# zurg-testing
## Building
A self-hosted Real-Debrid webdav server written from scratch, alternative to rclone_rd
```bash
docker build -t ghcr.io/debridmediamanager/zurg:latest .
```
## How to run zurg in 5 steps
This builds zurg
1. Clone this repo `git clone https://github.com/debridmediamanager/zurg-testing.git`
2. Add your token in `config.yml`
3. `sudo mkdir -p /mnt/zurg`
4. Run `docker compose up -d`
5. `time ls -1R /mnt/zurg` You're done!
The server is also exposed to your localhost via port 9999. You can point [Infuse](https://firecore.com/infuse) or any webdav clients to it.
> Note: I have only tested this on macOS and Linux
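Any WebDAV client can talk to that port with plain HTTP. As a minimal sketch (the root path `/` here is an assumption; point the URL wherever your client mounts zurg), a depth-1 PROPFIND request asks the server to list a collection's immediate children:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// newPropfindRequest builds a depth-1 WebDAV PROPFIND request, which asks
// a server like zurg to list a collection's immediate children.
func newPropfindRequest(url string) (*http.Request, error) {
	req, err := http.NewRequest("PROPFIND", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Depth", "1")
	return req, nil
}

func main() {
	req, err := newPropfindRequest("http://localhost:9999/")
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("zurg not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d bytes of multistatus XML\n", len(body))
}
```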
## Why zurg? Why not rclone_rd? Why not Real-Debrid's own webdav?
- Better performance than anything out there; changes in your library appear instantly (assuming Plex picks it up fast enough)
- You can access every file even when torrent names collide: zurg merges the contents of same-named torrents into one directory (e.g. two torrents both named "Simpsons" but holding different seasons), so zurg may list more files than other tools
- You can configure a flexible directory structure in `config.yml`; you can select individual torrents that should appear on a directory by the ID you see in [DMM](https://debridmediamanager.com/)
- If you've ever experienced the Plex scanner getting stuck on a file and thereby freezing Plex completely, that should no longer happen, because zurg does a comprehensive check of whether a torrent is dead
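The merge behavior described above can be pictured with a small sketch. The `torrentEntry` type and `mergeByName` helper below are hypothetical simplifications, not zurg's actual code: torrents sharing a display name contribute their files to one combined directory listing.

```go
package main

import "fmt"

// torrentEntry is a hypothetical, simplified stand-in for a torrent:
// just a display name and its file paths.
type torrentEntry struct {
	Name  string
	Files []string
}

// mergeByName groups files from torrents that share a name into one
// directory listing, mirroring the merge behavior described above.
func mergeByName(ts []torrentEntry) map[string][]string {
	merged := make(map[string][]string)
	for _, t := range ts {
		merged[t.Name] = append(merged[t.Name], t.Files...)
	}
	return merged
}

func main() {
	ts := []torrentEntry{
		{Name: "Simpsons", Files: []string{"S01E01.mkv"}},
		{Name: "Simpsons", Files: []string{"S02E01.mkv"}},
	}
	fmt.Println(mergeByName(ts)["Simpsons"]) // [S01E01.mkv S02E01.mkv]
}
```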
## config.yml
You need a `config.yml` created before you use zurg
You need a `config.yml` created before you can use zurg
```yaml
# Zurg configuration version
@@ -18,27 +31,26 @@ zurg: v1
token: YOUR_TOKEN_HERE
port: 9999
concurrent_workers: 10
check_for_changes_every_secs: 15
info_cache_time_hours: 12
concurrent_workers: 10 # the higher the number the faster zurg runs through your library but too high and you will get rate limited
check_for_changes_every_secs: 15 # zurg polls real-debrid for changes in your library
info_cache_time_hours: 12 # how long to cache torrent info before re-checking whether it is still alive; 12 to 24 hours is good enough
# repair fixes broken links, but the repaired file may not reappear in the same location (especially if only 1 episode is missing)
enable_repair: false # BEWARE! THERE CAN ONLY BE 1 INSTANCE OF ZURG THAT SHOULD REPAIR YOUR TORRENTS
# List of directory definitions and their filtering rules
directories:
# Configuration for TV shows
shows:
group: media # directories on different groups have duplicates of the same torrent
filters:
- regex: /season[\s\.]?\d/i # Capture torrent names with the term 'season' in any case
- regex: /Saison[\s\.]?\d/i # For non-English namings
- regex: /stage[\s\.]?\d/i
- regex: /saison[\s\.]?\d/i # For non-English namings
- regex: /stagione[\s\.]?\d/i # if there's French, there should be Italian too
- regex: /s\d\d/i # Capture common season notations like S01, S02, etc.
- regex: /\btv/i # anything that has TV in it is a TV show, right?
- contains: complete
- contains: seasons
- id: ATUWVRF53X5DA
- contains_strict: PM19
- contains_strict: Detective Conan Remastered
- contains_strict: Goblin Slayer
# Configuration for movies
movies:
@@ -46,87 +58,17 @@ directories:
filters:
- regex: /.*/ # you cannot leave a directory without filters because it will not have any torrents in it
# Configuration for Dolby Vision content
"hd movies":
group: another
filters:
- regex: /\b2160|\b4k|\buhd|\bdovi|\bdolby.?vision|\bdv|\bremux/i # Matches 4K/UHD markers and abbreviations of 'dolby vision'
"low quality":
group: another
"ALL MY STUFFS":
group: all # notice the group now is "all", which means it will have all the torrents of shows+movies combined because this directory is alone in this group
filters:
- regex: /.*/
# Configuration for children's content
kids:
"Kids":
group: kids
filters:
- contains: xxx # Ensures adult content is excluded
- id: XFPQ5UCMUVAEG # Specific inclusion by torrent ID
- not_contains: xxx # Ensures adult content is excluded
- id: XFPQ5UCMUVAEG # Specific inclusion by torrent ID
- id: VDRPYNRPQHEXC
- id: YELNX3XR5XJQM
```
## Running
### Standalone webdav server
```bash
docker run -v ./config.yml:/app/config.yml -v zurgdata:/app/data -p 9999:9999 ghcr.io/debridmediamanager/zurg:latest
```
- Runs zurg on port 9999 on your localhost
- Make sure you have config.yml in the current directory
- It creates a `zurgdata` volume for the data files
### with rclone
You will need to create a `media` directory to make the rclone mount work.
```yaml
version: '3.8'
services:
zurg:
image: ghcr.io/debridmediamanager/zurg:latest
restart: unless-stopped
ports:
- 9999
volumes:
- ./config.yml:/app/config.yml
- zurgdata:/app/data
rclone:
image: rclone/rclone:latest
restart: unless-stopped
environment:
TZ: Europe/Berlin
PUID: 1000
PGID: 1000
volumes:
- ./media:/data:rshared
- ./rclone.conf:/config/rclone/rclone.conf
cap_add:
- SYS_ADMIN
security_opt:
- apparmor:unconfined
devices:
- /dev/fuse:/dev/fuse:rwm
command: "mount zurg: /data --allow-non-empty --allow-other --uid 1000 --gid 1000 --dir-cache-time 1s --read-only"
volumes:
zurgdata:
```
Together with this `docker-compose.yml` you will also need this `rclone.conf` in the same directory.
```
[zurg]
type = http
url = http://zurg:9999/http
no_head = false
no_slash = true
```


@@ -6,6 +6,7 @@ port: 9999
concurrent_workers: 10
check_for_changes_every_secs: 15
info_cache_time_hours: 12
enable_repair: true # BEWARE! THERE CAN ONLY BE 1 INSTANCE OF ZURG THAT SHOULD REPAIR YOUR TORRENTS
# List of directory definitions and their filtering rules
directories:


@@ -13,6 +13,7 @@ type ConfigInterface interface {
GetNumOfWorkers() int
GetRefreshEverySeconds() int
GetCacheTimeHours() int
EnableRepair() bool
GetPort() string
GetDirectories() []string
MeetsConditions(directory, fileID, fileName string) bool


@@ -7,4 +7,5 @@ type ZurgConfig struct {
NumOfWorkers int `yaml:"concurrent_workers"`
RefreshEverySeconds int `yaml:"check_for_changes_every_secs"`
CacheTimeHours int `yaml:"info_cache_time_hours"`
CanRepair bool `yaml:"enable_repair"`
}
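The yaml tags on these fields are what bind config.yml keys to Go fields. As a quick stdlib-only sketch (the `yamlKeys` helper is illustrative, not part of zurg), reflection can enumerate the keys this struct expects:

```go
package main

import (
	"fmt"
	"reflect"
)

// ZurgConfig mirrors the struct from this diff.
type ZurgConfig struct {
	NumOfWorkers        int  `yaml:"concurrent_workers"`
	RefreshEverySeconds int  `yaml:"check_for_changes_every_secs"`
	CacheTimeHours      int  `yaml:"info_cache_time_hours"`
	CanRepair           bool `yaml:"enable_repair"`
}

// yamlKeys lists the config.yml keys a struct expects, read from its
// struct tags via reflection.
func yamlKeys(v interface{}) []string {
	t := reflect.TypeOf(v)
	keys := make([]string, 0, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		keys = append(keys, t.Field(i).Tag.Get("yaml"))
	}
	return keys
}

func main() {
	fmt.Println(yamlKeys(ZurgConfig{}))
}
```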


@@ -40,6 +40,10 @@ func (z *ZurgConfigV1) GetCacheTimeHours() int {
return z.CacheTimeHours
}
func (z *ZurgConfigV1) EnableRepair() bool {
return z.CanRepair
}
func (z *ZurgConfigV1) GetDirectories() []string {
rootDirectories := make([]string, len(z.Directories))
i := 0


@@ -1,7 +1,6 @@
package dav
import (
"log"
"path/filepath"
"github.com/debridmediamanager.com/zurg/internal/torrent"
@@ -52,7 +51,7 @@ func createSingleTorrentResponse(basePath string, torrents []torrent.Torrent) (*
for _, torrent := range torrents {
for _, file := range torrent.SelectedFiles {
if file.Link == "" {
log.Println("File has no link, skipping (repairing links take time)", file.Path)
// log.Println("File has no link, skipping (repairing links take time)", file.Path)
continue
}


@@ -2,7 +2,6 @@ package http
import (
"fmt"
"log"
"path/filepath"
"github.com/debridmediamanager.com/zurg/internal/torrent"
@@ -42,7 +41,7 @@ func createSingleTorrentResponse(basePath string, torrents []torrent.Torrent) (s
for _, torrent := range torrents {
for _, file := range torrent.SelectedFiles {
if file.Link == "" {
log.Println("File has no link, skipping", file.Path)
// log.Println("File has no link, skipping", file.Path)
continue
}


@@ -17,6 +17,7 @@ import (
type TorrentManager struct {
torrents []Torrent
inProgress []string
checksum string
config config.ConfigInterface
cache *expirable.LRU[string, string]
@@ -27,28 +28,54 @@ type TorrentManager struct {
// it will fetch all torrents and their info in the background
// and store them in-memory
func NewTorrentManager(config config.ConfigInterface, cache *expirable.LRU[string, string]) *TorrentManager {
handler := &TorrentManager{
t := &TorrentManager{
config: config,
cache: cache,
workerPool: make(chan bool, config.GetNumOfWorkers()),
}
// Initialize torrents for the first time
handler.torrents = handler.getAll()
t.torrents = t.getFreshListFromAPI()
t.checksum = t.getChecksum()
// log.Println("First checksum", t.checksum)
go t.mapToDirectories()
for _, torrent := range handler.torrents {
go func(id string) {
handler.workerPool <- true
handler.getInfo(id)
<-handler.workerPool
time.Sleep(1 * time.Second) // sleep for 1 second to avoid rate limiting
}(torrent.ID)
var wg sync.WaitGroup
for i := range t.torrents {
wg.Add(1)
go func(idx int) {
defer wg.Done()
t.workerPool <- true
t.addMoreInfo(&t.torrents[idx])
<-t.workerPool
}(i)
}
if t.config.EnableRepair() {
go t.repairAll(&wg)
}
// Start the periodic refresh
go handler.refreshTorrents()
go t.startRefreshJob()
return handler
return t
}
func (t *TorrentManager) repairAll(wg *sync.WaitGroup) {
wg.Wait()
for _, torrent := range t.torrents {
if torrent.ForRepair {
log.Println("Issues detected on", torrent.Name, "; fixing...")
t.repair(torrent.ID, torrent.SelectedFiles)
}
if len(torrent.Links) == 0 {
// If the torrent has no links
// and repair has already been attempted,
// delete it!
realdebrid.DeleteTorrent(t.config.GetToken(), torrent.ID)
}
}
}
// GetByDirectory returns all torrents that have a file in the specified directory
@@ -64,126 +91,145 @@ func (t *TorrentManager) GetByDirectory(directory string) []Torrent {
return torrents
}
// RefreshInfo refreshes the info for a torrent
func (t *TorrentManager) RefreshInfo(torrentID string) {
filePath := fmt.Sprintf("data/%s.bin", torrentID)
// Check the last modified time of the .bin file
fileInfo, err := os.Stat(filePath)
if err == nil {
modTime := fileInfo.ModTime()
// If the file was modified less than an hour ago, don't refresh
if time.Since(modTime) < time.Duration(t.config.GetCacheTimeHours())*time.Hour {
return
}
err = os.Remove(filePath)
if err != nil && !os.IsNotExist(err) { // removal failed for a reason other than the file not existing
log.Printf("Cannot remove file: %v\n", err)
}
} else if !os.IsNotExist(err) { // Error other than file not existing
log.Printf("Error checking file info: %v\n", err)
return
}
info := t.getInfo(torrentID)
log.Println("Refreshed info for", info.Name)
}
// MarkFileAsDeleted marks a file as deleted
func (t *TorrentManager) MarkFileAsDeleted(torrent *Torrent, file *File) {
log.Println("Marking file as deleted", file.Path)
file.Link = ""
t.writeToFile(torrent.ID, torrent)
t.writeToFile(torrent)
log.Println("Healing a single file in the torrent", torrent.Name)
t.heal(torrent.ID, []File{*file})
t.repair(torrent.ID, []File{*file})
}
// GetInfo returns the info for a torrent
func (t *TorrentManager) GetInfo(torrentID string) *Torrent {
for i := range t.torrents {
if t.torrents[i].ID == torrentID {
return &t.torrents[i]
// FindAllTorrentsWithName finds all torrents in a given directory with a given name
func (t *TorrentManager) FindAllTorrentsWithName(directory, torrentName string) []Torrent {
var matchingTorrents []Torrent
torrents := t.GetByDirectory(directory)
for i := range torrents {
if torrents[i].Name == torrentName || strings.HasPrefix(torrents[i].Name, torrentName) {
matchingTorrents = append(matchingTorrents, torrents[i])
}
}
return t.getInfo(torrentID)
return matchingTorrents
}
// getChecksum returns the checksum based on the total count and the first torrent's ID
func (t *TorrentManager) getChecksum() string {
torrents, totalCount, err := realdebrid.GetTorrents(t.config.GetToken(), 1)
if err != nil {
log.Printf("Cannot get torrents: %v\n", err)
return t.checksum
// findAllDownloadedFilesFromHash finds all downloaded files for a given hash
func (t *TorrentManager) findAllDownloadedFilesFromHash(hash string) []File {
var files []File
for _, torrent := range t.torrents {
if torrent.Hash == hash {
for _, file := range torrent.SelectedFiles {
if file.Link != "" {
files = append(files, file)
}
}
}
}
return files
}
type torrentsResponse struct {
torrents []realdebrid.Torrent
totalCount int
}
func (t *TorrentManager) getChecksum() string {
torrentsChan := make(chan torrentsResponse)
countChan := make(chan int)
errChan := make(chan error, 2) // accommodate errors from both goroutines
// GetTorrents request
go func() {
torrents, totalCount, err := realdebrid.GetTorrents(t.config.GetToken(), 1)
if err != nil {
errChan <- err
return
}
torrentsChan <- torrentsResponse{torrents: torrents, totalCount: totalCount}
}()
// GetActiveTorrentCount request
go func() {
count, err := realdebrid.GetActiveTorrentCount(t.config.GetToken())
if err != nil {
errChan <- err
return
}
countChan <- count.DownloadingCount
}()
var torrents []realdebrid.Torrent
var totalCount, count int
for i := 0; i < 2; i++ {
select {
case torrentsResp := <-torrentsChan:
torrents = torrentsResp.torrents
totalCount = torrentsResp.totalCount
case count = <-countChan:
case err := <-errChan:
log.Printf("Error: %v\n", err)
return ""
}
}
if len(torrents) == 0 {
log.Println("Huh, no torrents returned")
return t.checksum
return ""
}
return fmt.Sprintf("%d-%s-%v", totalCount, torrents[0].ID, torrents[0].Progress == 100)
checksum := fmt.Sprintf("%d%s%d", totalCount, torrents[0].ID, count)
return checksum
}
// refreshTorrents periodically refreshes the torrents
func (t *TorrentManager) refreshTorrents() {
// startRefreshJob periodically refreshes the torrents
func (t *TorrentManager) startRefreshJob() {
log.Println("Starting periodic refresh")
for {
<-time.After(time.Duration(t.config.GetRefreshEverySeconds()) * time.Second)
checksum := t.getChecksum()
if checksum == t.checksum {
continue
}
t.checksum = checksum
t.cache.Purge()
newTorrents := t.getAll()
newTorrents := t.getFreshListFromAPI()
var wg sync.WaitGroup
// Identify removed torrents
for i := 0; i < len(t.torrents); i++ {
found := false
for _, newTorrent := range newTorrents {
if t.torrents[i].ID == newTorrent.ID {
found = true
break
}
}
if !found {
// Remove this torrent from the slice
t.torrents = append(t.torrents[:i], t.torrents[i+1:]...)
i-- // Decrement index since we modified the slice
}
for i := range newTorrents {
wg.Add(1)
go func(idx int) {
defer wg.Done()
t.workerPool <- true
t.addMoreInfo(&newTorrents[idx])
<-t.workerPool
}(i)
}
wg.Wait()
// Identify and handle added torrents
for _, newTorrent := range newTorrents {
found := false
for _, torrent := range t.torrents {
if newTorrent.ID == torrent.ID {
found = true
break
}
}
if !found {
t.torrents = append(t.torrents, newTorrent)
go func(id string) {
t.workerPool <- true
t.getInfo(id)
<-t.workerPool
time.Sleep(1 * time.Second) // sleep for 1 second to avoid rate limiting
}(newTorrent.ID)
}
// apply side effects
t.torrents = newTorrents
t.checksum = t.getChecksum()
// log.Println("Checksum changed", t.checksum)
if t.config.EnableRepair() {
go t.repairAll(&wg)
}
go t.mapToDirectories()
}
}
// getAll returns all torrents
func (t *TorrentManager) getAll() []Torrent {
log.Println("Getting all torrents")
torrents, totalCount, err := realdebrid.GetTorrents(t.config.GetToken(), 0)
// getFreshListFromAPI returns all torrents
func (t *TorrentManager) getFreshListFromAPI() []Torrent {
torrents, _, err := realdebrid.GetTorrents(t.config.GetToken(), 0)
if err != nil {
log.Printf("Cannot get torrents: %v\n", err)
return nil
}
t.checksum = fmt.Sprintf("%d-%s", totalCount, torrents[0].ID)
// convert to own internal type without SelectedFiles yet
// populate inProgress
var torrentsV2 []Torrent
t.inProgress = t.inProgress[:0] // reset
for _, torrent := range torrents {
torrent.Name = strings.TrimSuffix(torrent.Name, "/")
torrentV2 := Torrent{
@@ -191,52 +237,48 @@ func (t *TorrentManager) getAll() []Torrent {
SelectedFiles: nil,
}
torrentsV2 = append(torrentsV2, torrentV2)
}
log.Printf("Fetched %d torrents", len(torrentsV2))
version := t.config.GetVersion()
if version == "v1" {
configV1 := t.config.(*config.ZurgConfigV1)
groupMap := configV1.GetGroupMap()
for group, directories := range groupMap {
log.Printf("Processing directory group: %s\n", group)
var directoryMap = make(map[string]int)
for i := range torrents {
for _, directory := range directories {
if configV1.MeetsConditions(directory, torrentsV2[i].ID, torrentsV2[i].Name) {
torrentsV2[i].Directories = append(torrentsV2[i].Directories, directory)
directoryMap[directory]++
break
}
}
}
log.Printf("Finished processing directory group: %v\n", directoryMap)
if torrent.Progress != 100 {
t.inProgress = append(t.inProgress, torrent.Hash)
}
}
log.Println("Finished mapping to groups")
log.Printf("Fetched %d torrents", len(torrentsV2))
return torrentsV2
}
// getInfo returns the info for a torrent
func (t *TorrentManager) getInfo(torrentID string) *Torrent {
torrentFromFile := t.readFromFile(torrentID)
// addMoreInfo updates the selected files for a torrent
func (t *TorrentManager) addMoreInfo(torrent *Torrent) {
// file cache
torrentFromFile := t.readFromFile(torrent.ID)
if torrentFromFile != nil {
torrent := t.getByID(torrentID)
if torrent != nil {
if len(torrentFromFile.SelectedFiles) == len(torrent.Links) {
torrent.SelectedFiles = torrentFromFile.SelectedFiles
return torrent
}
// see if api data and file data still match
// then it means data is still usable
if len(torrentFromFile.Links) == len(torrent.Links) {
torrent.ForRepair = torrentFromFile.ForRepair
torrent.SelectedFiles = torrentFromFile.SelectedFiles
return
}
}
log.Println("Getting info for", torrentID)
info, err := realdebrid.GetTorrentInfo(t.config.GetToken(), torrentID)
// no file data yet as it is still downloading
if torrent.Progress != 100 {
return
}
log.Println("Getting info for", torrent.ID)
info, err := realdebrid.GetTorrentInfo(t.config.GetToken(), torrent.ID)
if err != nil {
log.Printf("Cannot get info: %v\n", err)
return nil
return
}
// SelectedFiles is a subset of Files with only the selected ones
// it also has a Link field, which can be empty
// if it is empty, it means the file is no longer available
// Files+Links together are the same as SelectedFiles
var selectedFiles []File
// if some Links are empty, we need to repair it
forRepair := false
for _, file := range info.Files {
if file.Selected == 0 {
continue
@@ -246,23 +288,29 @@ func (t *TorrentManager) getInfo(torrentID string) *Torrent {
Link: "",
})
}
if len(selectedFiles) != len(info.Links) {
log.Println("Some links has expired for", info.Name)
selectedFiles = t.organizeChaos(info, selectedFiles)
t.heal(torrentID, selectedFiles)
if len(selectedFiles) > len(info.Links) && info.Progress == 100 {
log.Printf("Some links have expired for %s, %s: %d selected but only %d links\n", info.ID, info.Name, len(selectedFiles), len(info.Links))
// chaotic file means RD will not output the desired file selection
// e.g. even if we select just a single mkv, it will output a rar
var isChaotic bool
selectedFiles, isChaotic = t.organizeChaos(info, selectedFiles)
if isChaotic {
log.Println("This torrent is unfixable, ignoring", info.Name, info.ID)
} else {
log.Println("Marking for repair", info.Name)
forRepair = true
}
} else {
// all links are still intact! good!
for i, link := range info.Links {
selectedFiles[i].Link = link
}
}
torrent := t.getByID(torrentID)
if torrent != nil {
torrent.SelectedFiles = selectedFiles
}
if len(torrent.SelectedFiles) > 0 {
t.writeToFile(torrentID, torrent)
}
return torrent
// update the torrent with more data!
torrent.SelectedFiles = selectedFiles
torrent.ForRepair = forRepair
// update file cache
t.writeToFile(torrent)
}
// getByID returns a torrent by its ID
@@ -276,8 +324,8 @@ func (t *TorrentManager) getByID(torrentID string) *Torrent {
}
// writeToFile writes a torrent to a file
func (t *TorrentManager) writeToFile(torrentID string, torrent *Torrent) {
filePath := fmt.Sprintf("data/%s.bin", torrentID)
func (t *TorrentManager) writeToFile(torrent *Torrent) {
filePath := fmt.Sprintf("data/%s.bin", torrent.ID)
file, err := os.Create(filePath)
if err != nil {
log.Fatalf("Failed creating file: %s", err)
@@ -314,27 +362,25 @@ func (t *TorrentManager) readFromFile(torrentID string) *Torrent {
log.Fatalf("Failed decoding file: %s", err)
return nil
}
return &torrent
}
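The `writeToFile`/`readFromFile` pair caches each torrent as `data/<id>.bin`. The diff only shows "Failed decoding file", so the exact encoding is not visible here; assuming a binary codec like `encoding/gob`, the round trip looks roughly like this (the `cachedTorrent` type is a hypothetical stand-in):

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// cachedTorrent is a stand-in for the Torrent type cached in data/<id>.bin.
type cachedTorrent struct {
	ID   string
	Name string
}

// roundTrip encodes then decodes a value, the shape of a writeToFile /
// readFromFile pair (gob is an assumption; the diff only shows decoding).
func roundTrip(in cachedTorrent) (cachedTorrent, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(in); err != nil {
		return cachedTorrent{}, err
	}
	var out cachedTorrent
	err := gob.NewDecoder(&buf).Decode(&out)
	return out, err
}

func main() {
	out, err := roundTrip(cachedTorrent{ID: "ABC", Name: "Some.Show.S01"})
	fmt.Println(out, err)
}
```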
func (t *TorrentManager) reinsertTorrent(oldTorrentID string, missingFiles string, deleteIfFailed bool) bool {
torrent := t.GetInfo(oldTorrentID)
if torrent == nil {
return false
}
func (t *TorrentManager) reinsertTorrent(torrent *Torrent, missingFiles string, deleteIfFailed bool) bool {
// if missingFiles is not provided, look for missing files
if missingFiles == "" {
log.Println("Reinserting whole torrent", torrent.Name)
var selection string
for _, file := range torrent.SelectedFiles {
if file.Link == "" {
selection += fmt.Sprintf("%d,", file.ID)
}
selection += fmt.Sprintf("%d,", file.ID)
}
if selection == "" {
return false
}
missingFiles = selection[:len(selection)-1]
if len(selection) > 0 {
missingFiles = selection[:len(selection)-1]
}
} else {
log.Printf("Reinserting %d missing files for %s", len(strings.Split(missingFiles, ",")), torrent.Name)
}
// reinsert torrent
@@ -371,18 +417,19 @@ func (t *TorrentManager) reinsertTorrent(oldTorrentID string, missingFiles strin
realdebrid.DeleteTorrent(t.config.GetToken(), newTorrentID)
return false
}
if len(info.Links) != len(torrent.SelectedFiles) {
log.Printf("It doesn't fix the problem, got %d but we need %d\n", len(info.Links), len(torrent.SelectedFiles))
log.Printf("It doesn't fix the problem, got %d links but we need %d\n", len(info.Links), len(torrent.SelectedFiles))
realdebrid.DeleteTorrent(t.config.GetToken(), newTorrentID)
return false
}
log.Println("Reinsertion successful, deleting old torrent")
realdebrid.DeleteTorrent(t.config.GetToken(), oldTorrentID)
realdebrid.DeleteTorrent(t.config.GetToken(), torrent.ID)
}
return true
}
func (t *TorrentManager) organizeChaos(info *realdebrid.Torrent, selectedFiles []File) []File {
func (t *TorrentManager) organizeChaos(info *realdebrid.Torrent, selectedFiles []File) ([]File, bool) {
type Result struct {
Response *realdebrid.UnrestrictResponse
}
@@ -395,10 +442,10 @@ func (t *TorrentManager) organizeChaos(info *realdebrid.Torrent, selectedFiles [
for _, link := range info.Links {
wg.Add(1)
sem <- struct{}{} // Acquire semaphore
sem <- struct{}{}
go func(lnk string) {
defer wg.Done()
defer func() { <-sem }() // Release semaphore
defer func() { <-sem }()
unrestrictFn := func() (*realdebrid.UnrestrictResponse, error) {
return realdebrid.UnrestrictCheck(t.config.GetToken(), lnk)
@@ -416,6 +463,7 @@ func (t *TorrentManager) organizeChaos(info *realdebrid.Torrent, selectedFiles [
close(resultsChan)
}()
isChaotic := false
for result := range resultsChan {
found := false
for i := range selectedFiles {
@@ -425,6 +473,8 @@ func (t *TorrentManager) organizeChaos(info *realdebrid.Torrent, selectedFiles [
}
}
if !found {
// "chaos" file, we don't know where it belongs
isChaotic = true
selectedFiles = append(selectedFiles, File{
File: realdebrid.File{
Path: result.Response.Filename,
@@ -436,10 +486,111 @@ func (t *TorrentManager) organizeChaos(info *realdebrid.Torrent, selectedFiles [
}
}
return selectedFiles
return selectedFiles, isChaotic
}
func (t *TorrentManager) heal(torrentID string, selectedFiles []File) {
func (t *TorrentManager) repair(torrentID string, selectedFiles []File) {
torrent := t.getByID(torrentID)
if torrent == nil {
return
}
// check if it is already "being" repaired
found := false
for _, hash := range t.inProgress {
if hash == torrent.Hash {
found = true
break
}
}
if found {
log.Println("Repair in progress, skipping", torrentID)
return
}
// check if it is already repaired
foundFiles := t.findAllDownloadedFilesFromHash(torrent.Hash)
var missingFiles []File
for _, sFile := range selectedFiles {
if sFile.Link == "" {
found := false
for _, fFile := range foundFiles {
if sFile.Path == fFile.Path {
found = true
break
}
}
if !found {
missingFiles = append(missingFiles, sFile)
}
}
}
if len(missingFiles) == 0 {
log.Println(torrent.Name, "is already repaired")
return
}
// then we repair it!
log.Println("Repairing torrent", torrentID)
// check if we can still add more downloads
proceed := t.canCapacityHandle()
if !proceed {
log.Println("Cannot add more torrents, exiting")
return
}
// first solution: add the same selection, maybe it can be fixed by reinsertion?
success := t.reinsertTorrent(torrent, "", true)
if !success {
// if not, last resort: add only the missing files and do it in 2 batches
half := len(missingFiles) / 2
missingFiles1 := getFileIDs(missingFiles[:half])
missingFiles2 := getFileIDs(missingFiles[half:])
if missingFiles1 != "" {
t.reinsertTorrent(torrent, missingFiles1, false)
}
if missingFiles2 != "" {
t.reinsertTorrent(torrent, missingFiles2, false)
}
log.Println("Waiting for downloads to finish")
}
}
func (t *TorrentManager) mapToDirectories() {
// Map to directories
version := t.config.GetVersion()
if version == "v1" {
configV1 := t.config.(*config.ZurgConfigV1)
groupMap := configV1.GetGroupMap()
for group, directories := range groupMap {
log.Printf("Processing directory group: %s\n", group)
var directoryMap = make(map[string]int)
for i := range t.torrents {
for _, directory := range directories {
if configV1.MeetsConditions(directory, t.torrents[i].ID, t.torrents[i].Name) {
// append to t.torrents[i].Directories if not yet there
found := false
for _, dir := range t.torrents[i].Directories {
if dir == directory {
found = true
break
}
}
if !found {
t.torrents[i].Directories = append(t.torrents[i].Directories, directory)
}
directoryMap[directory]++
break
}
}
}
log.Printf("Directory group: %v\n", directoryMap)
}
}
log.Println("Finished mapping to directories")
}
func (t *TorrentManager) canCapacityHandle() bool {
// max waiting time is 45 minutes
const maxRetries = 50
const baseDelay = 1 * time.Second
@@ -451,7 +602,7 @@ func (t *TorrentManager) heal(torrentID string, selectedFiles []File) {
log.Printf("Cannot get active torrent count: %v\n", err)
if retryCount >= maxRetries {
log.Println("Max retries reached. Exiting.")
return
return false
}
delay := time.Duration(math.Pow(2, float64(retryCount))) * baseDelay
if delay > maxDelay {
@@ -464,12 +615,12 @@ func (t *TorrentManager) heal(torrentID string, selectedFiles []File) {
if count.DownloadingCount < count.MaxNumberOfTorrents {
log.Printf("We can still add a new torrent, %d/%d\n", count.DownloadingCount, count.MaxNumberOfTorrents)
break
return true
}
if retryCount >= maxRetries {
log.Println("Max retries reached. Exiting.")
return
return false
}
delay := time.Duration(math.Pow(2, float64(retryCount))) * baseDelay
if delay > maxDelay {
@@ -478,30 +629,18 @@ func (t *TorrentManager) heal(torrentID string, selectedFiles []File) {
time.Sleep(delay)
retryCount++
}
// now we can get the missing files
half := len(selectedFiles) / 2
missingFiles1 := getMissingFiles(0, half, selectedFiles)
missingFiles2 := getMissingFiles(half, len(selectedFiles), selectedFiles)
// first solution: add the same selection, maybe it can be fixed by reinsertion?
success := t.reinsertTorrent(torrentID, "", true)
if !success {
// if not, last resort: add only the missing files and do it in 2 batches
t.reinsertTorrent(torrentID, missingFiles1, false)
t.reinsertTorrent(torrentID, missingFiles2, false)
}
}
func getMissingFiles(start, end int, files []File) string {
var missingFiles string
for i := start; i < end; i++ {
if files[i].File.Selected == 1 && files[i].ID != 0 && files[i].Link == "" {
missingFiles += fmt.Sprintf("%d,", files[i].ID)
func getFileIDs(files []File) string {
var fileIDs string
for _, file := range files {
// this won't include the id=0 files that were "chaos"
if file.File.Selected == 1 && file.ID != 0 && file.Link == "" {
fileIDs += fmt.Sprintf("%d,", file.ID)
}
}
if len(missingFiles) > 0 {
missingFiles = missingFiles[:len(missingFiles)-1]
if len(fileIDs) > 0 {
fileIDs = fileIDs[:len(fileIDs)-1]
}
return missingFiles
return fileIDs
}
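`getFileIDs` builds the comma-separated ID list by appending `"%d,"` and slicing off the trailing comma. An equivalent sketch with `strings.Join` avoids the trim step entirely (the `fileIDsCSV` helper below is a simplified illustration over bare IDs, not zurg's `File` type):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// fileIDsCSV joins file IDs into the comma-separated form the
// reinsertion API expects, skipping id 0 ("chaos" files with no slot).
func fileIDsCSV(ids []int) string {
	parts := make([]string, 0, len(ids))
	for _, id := range ids {
		if id != 0 {
			parts = append(parts, strconv.Itoa(id))
		}
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(fileIDsCSV([]int{3, 0, 7})) // 3,7
}
```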