Fix bug where updates were not fetched on manual library refresh, only on automatic refresh
If a manga somehow missed its check period, it would not be given a new next-update cycle and would be left behind forever
* Add support for kotlin.time APIs in the rate limit interceptor (see the sketch after this list).
* Add a missing line break in the doc.
* Move the specific host to the same file.
* Add kotlin.time rule to Proguard and remove specific host rule.
* Mark the old version as deprecated and address review.
* Remove unused import.
* Remove yet another unused import.
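A minimal sketch of what kotlin.time-friendly rate limiting could look like, assuming an OkHttp interceptor behind a `rateLimit` builder extension; the names, window logic, and default period below are illustrative, not the actual implementation:

```kotlin
import kotlin.time.Duration
import kotlin.time.Duration.Companion.seconds
import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response

// Hypothetical builder extension: accept kotlin.time's Duration instead of a Long + TimeUnit pair.
fun OkHttpClient.Builder.rateLimit(
    permits: Int,
    period: Duration = 1.seconds,
): OkHttpClient.Builder = addInterceptor(RateLimitInterceptor(permits, period))

class RateLimitInterceptor(
    private val permits: Int,
    private val period: Duration,
) : Interceptor {
    // Timestamps (ms) of the most recent requests inside the current window.
    private val requestTimestamps = ArrayDeque<Long>(permits)

    override fun intercept(chain: Interceptor.Chain): Response {
        synchronized(requestTimestamps) {
            val now = System.currentTimeMillis()
            val windowStart = now - period.inWholeMilliseconds
            // Drop timestamps that have fallen out of the rate limit window.
            while (requestTimestamps.isNotEmpty() && requestTimestamps.first() <= windowStart) {
                requestTimestamps.removeFirst()
            }
            if (requestTimestamps.size >= permits) {
                // Block until the oldest request leaves the window.
                Thread.sleep(requestTimestamps.first() - windowStart)
                requestTimestamps.removeFirst()
            }
            requestTimestamps.addLast(System.currentTimeMillis())
        }
        return chain.proceed(chain.request())
    }
}
```

Keeping a Long-based overload around and marking it `@Deprecated` in favour of the Duration one would match the "mark the old version as deprecated" step above.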
* Add Predict Interval Test
* Get mangas' next update and interval in library update (see the interval sketch after this list)
* Get next update and interval in backup restore
* Display and set intervals, nextUpdate in Manga Info
* Move logic function to MangaScreen and InfoHeader
Update per suggestion
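A condensed sketch of the interval idea, assuming the interval is estimated from recent chapter upload dates and then used to schedule the next update; the function names, weekly default, and clamping are assumptions for illustration:

```kotlin
import java.time.Duration
import java.time.Instant
import java.time.temporal.ChronoUnit

// Estimate how often a manga receives new chapters from its upload history.
// Falls back to a weekly default when there is not enough data (assumption).
fun predictIntervalDays(uploadDates: List<Instant>, defaultDays: Int = 7): Int {
    val sorted = uploadDates.sorted()
    if (sorted.size < 2) return defaultDays
    // Average gap between consecutive uploads, clamped to a sane range.
    val gapsInDays = sorted.zipWithNext { previous, next -> ChronoUnit.DAYS.between(previous, next) }
    return gapsInDays.average().toInt().coerceIn(1, 28)
}

// Derive the next update time from the last update and the predicted interval.
fun nextUpdateAfter(lastUpdate: Instant, intervalDays: Int): Instant =
    lastUpdate.plus(Duration.ofDays(intervalDays.toLong()))
```

With something like this in place, a manga that misses its check window can simply be given a fresh `nextUpdate` on the following refresh instead of being skipped forever.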
---------
Co-authored-by: arkon <arkon@users.noreply.github.com>
* refactor: backup and restore to support cross-device sync.
* chore: Updated string resources
* refactor: change function name.
* refactor: Use URI; SyncHolder.kt is not needed anymore.
* feat: added migrations.
* feat: create triggers, account for new installs.
* feat: update mappers to include the new field.
* feat: update backupManga and backupChapter.
Include the new fields to be backed up as well.
* feat: add sql query to fetch all manga with `last_favorited_at` field.
* feat: version bump.
* chore: revert and refactor.
* chore: forgot to lower case the field name.
* chore: added getAllManga query as well as renamed `fetchMangaWithLastFavorite` to `getMangasWithFavoriteTimestamp`
* chore: oops that's not meant to be there.
* feat: backfill and set last_modified_at to not null (see the migration sketch below).
* chore: remove redundant triggers.
* fix: build error, accidentally removed insert.
* refactor: review pointer, make fields not null.
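A rough sketch of the migration shape described above, assuming a plain list of SQLite statements run during the version bump; the table, column, and trigger names follow the commit messages, but the exact statements (including the `_id` key) are assumptions:

```kotlin
// Hypothetical migration statements for the cross-device sync field.
val crossDeviceSyncMigration = listOf(
    // SQLite disallows non-constant defaults in ALTER TABLE, so add with 0 and backfill afterwards.
    "ALTER TABLE mangas ADD COLUMN last_modified_at INTEGER NOT NULL DEFAULT 0",
    "UPDATE mangas SET last_modified_at = strftime('%s', 'now')",
    // Keep the column current on every row update (new installs create the trigger too).
    """
    CREATE TRIGGER IF NOT EXISTS update_last_modified_at_mangas
    AFTER UPDATE ON mangas
    FOR EACH ROW
    BEGIN
        UPDATE mangas SET last_modified_at = strftime('%s', 'now') WHERE _id = new._id;
    END
    """.trimIndent(),
)
```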
* Serialize whole chapter numbers without a decimal point (sketched below) and add library categories to genre
* added Tachiyomi-specific ComicInfo Category field
* lint
* implemented requested changes
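A small sketch of the whole-number serialization, assuming the ComicInfo Number field is written through a helper like this (the function name is hypothetical):

```kotlin
// Whole chapter numbers drop the trailing ".0"; fractional chapters keep the decimal.
fun formatChapterNumberForComicInfo(chapterNumber: Float): String {
    return if (chapterNumber == chapterNumber.toLong().toFloat()) {
        chapterNumber.toLong().toString()   // 5.0 -> "5"
    } else {
        chapterNumber.toString()            // 5.5 -> "5.5"
    }
}
```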
* Dialog for service tracker removal added, anilist query prepared
* added API delete requests for Mal and Kitsu
* implement and fix tracker delete for anilist, shikimori, mangaupdates
* implement and test mal delete request
* Update dialog text to reflect the current tracker
* finish kitsu api request and block bangumi tracker removal
* Change delete flag into an interface (see the sketch below), localise strings, clean up logs
* Add shikimori delete compatibility for already existing entries
* update track delete dialog prompt to include checkbox, update strings
* Update i18n/src/main/res/values/strings.xml
Co-authored-by: stevenyomi <95685115+stevenyomi@users.noreply.github.com>
* Update i18n/src/main/res/values/strings.xml
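A sketch of the "delete flag into interface" idea, with hypothetical type and function names: trackers that support remote deletion opt in by implementing a small interface, and the removal flow only issues the remote delete for those trackers, gated by the dialog checkbox:

```kotlin
data class Track(val remoteId: Long)

interface Tracker {
    val name: String
}

// Only trackers that can actually remove an entry remotely implement this.
interface DeletableTracker : Tracker {
    suspend fun delete(track: Track)
}

suspend fun removeTracking(tracker: Tracker, track: Track, alsoDeleteFromService: Boolean) {
    if (alsoDeleteFromService && tracker is DeletableTracker) {
        // Remote removal, e.g. a DELETE request against the service's API.
        tracker.delete(track)
    }
    // The local link is removed regardless of the remote outcome (placeholder).
    println("Unlinked ${tracker.name} track ${track.remoteId} locally")
}
```

A tracker without remote delete support (Bangumi, per the commit above) simply never implements `DeletableTracker`, so the option is not offered for it.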
---------
Co-authored-by: unknown <zaghdane@fireflow.de>
Co-authored-by: arkon <arkon@users.noreply.github.com>
Co-authored-by: stevenyomi <95685115+stevenyomi@users.noreply.github.com>
* Rename removeFromQueueByPredicate to removeFromQueueIf
Follow-up to PR comment in #9511
* Make Download hashCode stable
Mutating pages would previously change the Download hashCode, which
breaks HashMap lookups.
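A minimal sketch of the stable-hash idea, with assumed field names: equals/hashCode are based only on identifiers that never change after the Download is created, so mutating the pages later cannot break map lookups:

```kotlin
class Download(val mangaId: Long, val chapterId: Long) {
    // Mutable state, deliberately excluded from equals/hashCode.
    var pages: List<String>? = null

    override fun equals(other: Any?): Boolean {
        if (this === other) return true
        if (other !is Download) return false
        return mangaId == other.mangaId && chapterId == other.chapterId
    }

    override fun hashCode(): Int = 31 * mangaId.hashCode() + chapterId.hashCode()
}
```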
* Convert Downloader subscription to coroutine
Replace downloadsRelay with activeDownloadsFlow. Instead of managing
a PublishRelay independent from the queue, derive a Flow of active
downloads directly from the queue StateFlow. (This will allow
updating the queue without pausing the downloader, to be done in a
follow-up PR.)
When a download completes successfully, the download is removed from
queueState. This updates activeDownloadsFlow and causes the
downloaderJob to start the download job for the next active download.
When a download fails, the download is left in the queue, so
queueState is not modified. To make activeDownloadsFlow update
without a change to queueState, use transformLatest and use the
Download statusFlows to suspend until a download reaches the ERROR
state.
To avoid stopping and starting downloads every time
activeDownloadsFlow emits a new value, maintain a map of current
download Jobs and only start/stop jobs in the difference between
downloadJobs and activeDownloads. To make sure all child download
jobs are cancelled when the top-level downloader job is cancelled,
use supervisorScope.
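A condensed sketch of the flow described above; the identifiers, parallel-download limit, and status model are assumptions, not the actual Downloader code:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.Job
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.first
import kotlinx.coroutines.flow.merge
import kotlinx.coroutines.flow.transformLatest
import kotlinx.coroutines.launch
import kotlinx.coroutines.supervisorScope

enum class Status { QUEUE, DOWNLOADING, DOWNLOADED, ERROR }

class Download(val chapterId: Long) {
    val statusFlow = MutableStateFlow(Status.QUEUE)
}

class Downloader(
    private val scope: CoroutineScope,
    private val queueState: StateFlow<List<Download>>,
) {
    private val downloadJobs = mutableMapOf<Download, Job>()
    private var downloaderJob: Job? = null

    // Active downloads are derived straight from the queue; no separate relay to keep in sync.
    @OptIn(ExperimentalCoroutinesApi::class)
    private val activeDownloadsFlow = queueState.transformLatest { queue ->
        while (true) {
            val actives = queue.filter { it.statusFlow.value != Status.ERROR }.take(3)
            emit(actives)
            if (actives.isEmpty()) break
            // Failed downloads stay in the queue, so queueState does not change on error.
            // Suspend on the status flows until one errors, then recompute and re-emit the active set.
            actives.map { it.statusFlow }.merge().first { it == Status.ERROR }
        }
    }

    fun start() {
        downloaderJob = scope.launch {
            supervisorScope {
                activeDownloadsFlow.collect { actives ->
                    // Only cancel jobs that dropped out of the active set...
                    (downloadJobs.keys - actives.toSet()).forEach { downloadJobs.remove(it)?.cancel() }
                    // ...and only start jobs for downloads that just became active.
                    (actives - downloadJobs.keys).forEach { download ->
                        downloadJobs[download] = launch { downloadChapter(download) }
                    }
                }
            }
        }
    }

    fun stop() {
        // Cancelling the top-level job cancels all child download jobs under the supervisorScope.
        downloaderJob?.cancel()
        downloadJobs.clear()
    }

    private suspend fun downloadChapter(download: Download) {
        download.statusFlow.value = Status.DOWNLOADING
        // Page downloading would happen here; completion or error updates statusFlow and/or queueState.
    }
}
```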
* Remove obsolete main thread references in Downloader
Thread safety of the queue state used to be guaranteed by running all
queue mutation on the main thread, but this has not been true for
some time. Since the queue state is now backed by a StateFlow,
queueState can be safely updated by any thread.
Fetch each source image URL immediately before downloading each image
instead of fetching all URLs and then downloading all images.
Source image URLs may change, so the downloader may fail if there is
too long a delay between fetching the image URL and downloading the
image.
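A small sketch of the just-in-time URL fetch with hypothetical types; the point is simply that the URL lookup and the image download for a page happen back to back:

```kotlin
class Page(val index: Int, var imageUrl: String? = null)

interface PageSource {
    suspend fun getImageUrl(page: Page): String
    suspend fun downloadImage(page: Page)
}

suspend fun downloadPages(source: PageSource, pages: List<Page>) {
    for (page in pages) {
        // Resolve the (possibly short-lived) image URL right before it is needed...
        if (page.imageUrl.isNullOrEmpty()) {
            page.imageUrl = source.getImageUrl(page)
        }
        // ...and download the image immediately, so the URL has no time to expire.
        source.downloadImage(page)
    }
}
```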
* Change a download's directory name when the chapter's name is composed only of numbers or is blank (naming rule sketched below)
* Update handling in case the chapter name is blank or empty
* clean code
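A sketch of the naming rule, with the exact prefix and fallback as assumptions: blank or purely numeric chapter names get a more descriptive directory name instead of being used verbatim:

```kotlin
fun chapterDirName(chapterName: String, chapterNumber: Float): String {
    val trimmed = chapterName.trim()
    return when {
        trimmed.isEmpty() -> "Chapter $chapterNumber"          // blank name: fall back to the number
        trimmed.toDoubleOrNull() != null -> "Chapter $trimmed" // numeric-only name: add a prefix
        else -> trimmed                                        // normal names are kept as-is
    }
}
```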