Mirror of https://github.com/unshackle-dl/unshackle.git (synced 2025-10-23 15:11:08 +00:00)
Compare commits
18 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 1d4e8bf9ec | |
| | b4a1f2236e | |
| | 3277ab0d77 | |
| | be0f7299f8 | |
| | 948ef30de7 | |
| | 1bd63ddc91 | |
| | 4dff597af2 | |
| | 8dbdde697d | |
| | 63c697f082 | |
| | 3e0835d9fb | |
| | c6c83ee43b | |
| | 507690834b | |
| | f8a58d966b | |
| | 8d12b735ff | |
| | 1aaea23669 | |
| | e3571b9518 | |
| | b478a00519 | |
| | 24fb8fb00c | |
CHANGELOG.md (84 changed lines)
@@ -5,6 +5,60 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [1.4.0] - 2025-08-05
+
+### Added
+
+- **HLG Transfer Characteristics Preservation**: Enhanced video muxing to preserve HLG color metadata (sketch below)
+  - Added automatic detection of HLG video tracks during the muxing process
+  - Implemented the `--color-transfer-characteristics 0:18` argument for mkvmerge when processing HLG content
+  - Prevents incorrect conversion from HLG (18) to BT.2020 (14) transfer characteristics
+  - Ensures proper HLG playback on compatible hardware without manual editing
+- **Original Language Support**: Enhanced language selection with an 'orig' keyword (sketch below)
+  - Added support for the 'orig' language selector for both video and audio tracks
+  - Automatically detects and uses the title's original language when 'orig' is specified
+  - Improved language-processing logic with better duplicate handling
+  - Enhanced help text to document original-language selection usage
+- **Forced Subtitle Support**: Added an option to include forced subtitle tracks
+  - New functionality to download and include forced subtitle tracks alongside regular subtitles
+- **WebVTT Subtitle Filtering**: Enhanced subtitle processing capabilities (sketch below)
+  - Added filtering for unwanted cues in WebVTT subtitles
+  - Improved subtitle quality by removing unnecessary metadata
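A minimal sketch of the HLG handling described above. The real change lands in the `Tracks` muxing diff near the end of this compare; `range`/`HLG` and the `0:18` argument follow that diff, everything else here is illustrative:

```python
# Sketch: emit mkvmerge color-transfer arguments for HLG tracks.
# "18" is the ARIB STD-B67 (HLG) transfer value; without the explicit
# flag, mkvmerge can tag the track as BT.2020 (14) instead.
def hlg_mux_args(video_track, track_id: int = 0) -> list[str]:
    if getattr(video_track, "range", None) == "HLG":  # stand-in for Video.Range.HLG
        return ["--color-transfer-characteristics", f"{track_id}:18"]
    return []
```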
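The 'orig' expansion can be distilled as follows (a sketch; `title_language` stands in for `title.language` from the dl.py diff below, and duplicates are dropped as described):

```python
def expand_orig(requested: list[str], title_language: str | None) -> list[str]:
    """Replace the 'orig' keyword with the title's original language."""
    processed: list[str] = []
    for language in requested:
        if language == "orig":
            # 'orig' is skipped when the title has no known original language
            if title_language and title_language not in processed:
                processed.append(title_language)
        elif language not in processed:
            processed.append(language)
    return processed

print(expand_orig(["orig", "en"], "ja"))  # ['ja', 'en']
```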
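The cue filter amounts to the following (a sketch mirroring `Subtitle.filter_unwanted_cues` from the subtitle diff below; pycaption's `CaptionSet`/`CaptionList` API is used as in that diff):

```python
import pycaption

def filter_unwanted_cues(caption_set: pycaption.CaptionSet) -> None:
    """Drop cues whose text is empty or only whitespace/non-breaking spaces."""
    for lang in caption_set.get_languages():
        kept = pycaption.CaptionList()
        for caption in caption_set.get_captions(lang):
            text = caption.get_text().strip()
            # \xa0 is a non-breaking space (&nbsp;), a common filler cue
            if text and not all(c in " \t\n\r\xa0" for c in text):
                kept.append(caption)
        caption_set.set_captions(lang, kept)
```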
+
+### Changed
+
+- **DRM Track Decryption**: Improved DRM decryption track selection logic
+  - Enhanced `get_drm_for_cdm()` method usage for better DRM-CDM matching
+  - Added warning messages when no matching DRM is found for a track
+  - Improved error handling and logging for DRM decryption failures
+- **Series Tree Representation**: Enhanced episode tree display formatting (sketch below)
+  - Updated the series tree to show a season breakdown with episode counts
+  - Improved the visual representation with an "S{season}({count})" format
+  - Better organization of series information in console output
+- **Hybrid Processing UI**: Enhanced extraction and conversion processes
+  - Added dynamic spinners to match the design used in the rest of the codebase
+  - Improved visual feedback during hybrid HDR processing operations
+- **Track Selection Logic**: Enhanced multi-track selection capabilities (sketch below)
+  - Fixed track selection so the -V, -A, and -S flags can be combined
+  - Improved flexibility in selecting multiple track types simultaneously
+- **Service Subtitle Support**: Added configuration for services without subtitle support (sketch below)
+  - Services can now indicate that they do not support subtitle downloads
+  - Prevents unnecessary subtitle download attempts for such services
+- **Update Checker**: Enhanced update-checking logic and cache handling (sketch below)
+  - Improved rate limiting and caching for update checks
+  - Better performance and fewer API calls to GitHub
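The season breakdown from the Series Tree entry reduces to a `Counter` over episode seasons (sketch; `episodes` is any iterable of objects with a `season` attribute):

```python
from collections import Counter
from types import SimpleNamespace

episodes = [SimpleNamespace(season=s) for s in (1, 1, 1, 2, 2)]
seasons = Counter(ep.season for ep in episodes)
breakdown = ", ".join(f"S{season}({count})" for season, count in sorted(seasons.items()))
print(f"{len(seasons)} seasons, {breakdown}")  # 2 seasons, S1(3), S2(2)
```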
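The combined flag handling from the Track Selection entry can be sketched as follows (behavior mirrors the dl.py diff below; the function name and dict are illustrative):

```python
def resolve_kept_types(video_only: bool, audio_only: bool, subs_only: bool,
                       chapters_only: bool, no_subs: bool = False,
                       no_audio: bool = False, no_chapters: bool = False) -> dict[str, bool]:
    # exclusive flags now combine: -V -A keeps video AND audio
    if video_only or audio_only or subs_only or chapters_only:
        keep = {"videos": video_only, "audio": audio_only,
                "subtitles": subs_only, "chapters": chapters_only}
    else:
        keep = {"videos": True, "audio": True, "subtitles": True, "chapters": True}
    # exclusion flags apply on top
    if no_subs:
        keep["subtitles"] = False
    if no_audio:
        keep["audio"] = False
    if no_chapters:
        keep["chapters"] = False
    return keep

print(resolve_kept_types(True, True, False, False))
# {'videos': True, 'audio': True, 'subtitles': False, 'chapters': False}
```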
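A service opts out of subtitles with a single class attribute (sketch; `MyService` is a hypothetical stand-in for a Service subclass, and the check mirrors the dl.py diff below):

```python
class MyService:
    NO_SUBTITLES = True  # this service cannot provide subtitle downloads

service = MyService()
if getattr(service, "NO_SUBTITLES", False):
    print("Skipping subtitles - service does not support subtitle downloads")
```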
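The rate limiting reduces to a timestamp comparison against a small JSON cache (a sketch of the logic in the UpdateChecker diff below; the cache file name follows that diff):

```python
import json
import time
from pathlib import Path

def should_check(cache_file: Path, interval_seconds: int = 24 * 60 * 60) -> bool:
    """Return True when the last recorded check is older than the interval."""
    try:
        cache = json.loads(cache_file.read_text())
    except (OSError, json.JSONDecodeError):
        return True  # missing or corrupt cache: allow the check
    return (time.time() - cache.get("last_check", 0)) >= interval_seconds

print(should_check(Path("update_check.json")))
```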
+
+### Fixed
+
+- **PlayReady KID Extraction**: Enhanced KID extraction from PSSH data (sketch below)
+  - Added base64 support and XML parsing for better KID detection
+  - Fixed an issue where only one KID was extracted for certain services
+  - Improved multi-KID support for PlayReady-protected content
+- **Dolby Vision Detection**: Improved DV codec detection across all formats
+  - Fixed detection of the dvhe.05.06 codec, which was not being recognized correctly
+  - Enhanced detection logic in the Episode and Movie title classes
+  - Better support for various Dolby Vision codec variants
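The fix works by decoding the PSSH and parsing the embedded WRMHEADER XML directly. A distilled sketch of `_extract_kids_from_pssh_b64` from the PlayReady diff below; error handling is trimmed, and the base64 padding fix-up is an assumption on top of the diff's `value + "=="`:

```python
import base64
import xml.etree.ElementTree as ET
from uuid import UUID

def kids_from_pssh_b64(pssh_b64: str) -> list[UUID]:
    # PlayReady PSSH data usually embeds the WRMHEADER XML as UTF-16LE
    text = base64.b64decode(pssh_b64).decode("utf-16le", errors="ignore")
    start = text.find("<WRMHEADER")
    if start == -1:
        return []
    end = text.find("</WRMHEADER>") + len("</WRMHEADER>")
    root = ET.fromstring(text[start:end])
    ns = {"pr": "http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader"}
    kids = []
    for elem in root.findall(".//pr:CUSTOMATTRIBUTES/pr:KIDS/pr:KID", ns):
        value = elem.get("VALUE")
        if value:
            padded = value + "=" * (-len(value) % 4)  # repair stripped padding
            kids.append(UUID(bytes_le=base64.b64decode(padded)))
    return kids
```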
+
 ## [1.3.0] - 2025-08-03
 
 ### Added
@@ -15,6 +69,24 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
   - Enhanced PlayReady and Widevine DRM classes with mp4decrypt decryption support
   - Service-specific decryption mapping allows choosing between `shaka` and `mp4decrypt` per service (sketch below)
   - Improved error handling and progress reporting for mp4decrypt operations
+- **Scene Naming Configuration**: New `scene_naming` option for controlling file-naming conventions
+  - Added scene-naming logic to the movie, episode, and song title classes
+  - Configurable through unshackle.yaml to enable/disable scene naming standards
+- **Terminal Cleanup and Signal Handling**: Enhanced console management (sketch below)
+  - Implemented proper terminal cleanup on application exit
+  - Added signal handling for graceful shutdown in ComfyConsole
+- **Configuration Template**: New `unshackle-example.yaml` template file
+  - Replaced the main `unshackle.yaml` with an example template to prevent git conflicts
+  - Users can now modify their local config without affecting repository updates
+- **Enhanced Credential Management**: Improved CDM and vault configuration
+  - Expanded the credential-management documentation in the configuration
+  - Enhanced CDM configuration examples and guidelines
+- **Video Transfer Standards**: Added an `Unspecified_Image` option to the Transfer enum
+  - Implements ITU-T H.Sup19 standard value 2 for image characteristics
+  - Supports still-image coding systems and unknown transfer characteristics
+- **Update Check Rate Limiting**: Enhanced the update-checking system
+  - Added configurable update-check intervals to prevent excessive API calls
+  - Improved rate limiting for GitHub API requests
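Per-service decryption selection boils down to a lookup that feeds the `use_mp4decrypt` parameter (sketch; the mapping keys and the `shaka` default are assumptions, while the parameter name comes from the dl.py diff below):

```python
def use_mp4decrypt_for(service_tag: str, decryption_map: dict[str, str]) -> bool:
    """True when the service is mapped to mp4decrypt rather than shaka."""
    return decryption_map.get(service_tag, "shaka") == "mp4decrypt"

# e.g. with a mapping loaded from unshackle.yaml:
print(use_mp4decrypt_for("EXAMPLE", {"EXAMPLE": "mp4decrypt"}))  # True
# drm.decrypt(track.path, use_mp4decrypt=...) then picks the tool.
```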
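The terminal cleanup amounts to restoring the cursor on exit and on SIGINT/SIGTERM. A distilled sketch of the handlers described here; note that the ComfyConsole diff later in this compare removes this in-class registration again:

```python
import atexit
import signal
import sys

def cleanup_terminal() -> None:
    sys.stdout.write("\x1b[?25h")  # show cursor
    sys.stdout.write("\x1b[0m")    # reset attributes
    sys.stdout.flush()

def handle_signal(signum, frame) -> None:
    cleanup_terminal()
    sys.exit(1)

atexit.register(cleanup_terminal)
signal.signal(signal.SIGINT, handle_signal)
signal.signal(signal.SIGTERM, handle_signal)
```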
 
 ### Changed
 
@@ -22,12 +94,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
   - Updated `dl.py` to handle service-specific decryption method selection
   - Refactored the `Config` class to manage the decryption method mapping per service
   - Enhanced DRM decrypt methods with a `use_mp4decrypt` parameter for method selection
+- **Error Handling**: Improved exception handling in the Hybrid class
+  - Replaced log.exit calls with ValueError exceptions for better error propagation
+  - Enhanced error-handling consistency across hybrid processing
 
 ### Fixed
 
 - **Service Track Filtering**: Cleaned up the ATVP service to remove unnecessary track filtering
   - Simplified the track return logic to pass all tracks to dl.py for centralized filtering
   - Removed unused codec and quality filter parameters from service initialization
 - **Proxy Configuration**: Fixed proxy server mapping in configuration
   - Renamed 'servers' to 'server_map' in the proxy configuration to resolve Nord/Surfshark naming conflicts
   - Updated the configuration structure for better compatibility with proxy providers
+- **HTTP Vault**: Improved URL handling and key-retrieval logic
+  - Fixed URL-processing issues in HTTP-based key vaults
+  - Enhanced key-retrieval reliability and error handling
 
 ## [1.2.0] - 2025-07-30
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 
 [project]
 name = "unshackle"
-version = "1.3.0"
+version = "1.4.0"
 description = "Modular Movie, TV, and Music Archival Software."
 authors = [{ name = "unshackle team" }]
 requires-python = ">=3.10,<3.13"
@@ -139,7 +139,13 @@ class dl:
         default=None,
         help="Wanted episodes, e.g. `S01-S05,S07`, `S01E01-S02E03`, `S02-S02E03`, e.t.c, defaults to all.",
     )
-    @click.option("-l", "--lang", type=LANGUAGE_RANGE, default="en", help="Language wanted for Video and Audio.")
+    @click.option(
+        "-l",
+        "--lang",
+        type=LANGUAGE_RANGE,
+        default="en",
+        help="Language wanted for Video and Audio. Use 'orig' to select the original language, e.g. 'orig,en' for both original and English.",
+    )
     @click.option(
         "-vl",
         "--v-lang",
@@ -148,6 +154,7 @@ class dl:
         help="Language wanted for Video, you would use this if the video language doesn't match the audio.",
     )
     @click.option("-sl", "--s-lang", type=LANGUAGE_RANGE, default=["all"], help="Language wanted for Subtitles.")
+    @click.option("-fs", "--forced-subs", is_flag=True, default=False, help="Include forced subtitle tracks.")
    @click.option(
        "--proxy",
        type=str,
|
||||
@@ -405,6 +412,7 @@ class dl:
         lang: list[str],
         v_lang: list[str],
         s_lang: list[str],
+        forced_subs: bool,
         sub_format: Optional[Subtitle.Codec],
         video_only: bool,
         audio_only: bool,
@@ -533,7 +541,12 @@ class dl:
         events.subscribe(events.Types.TRACK_REPACKED, service.on_track_repacked)
         events.subscribe(events.Types.TRACK_MULTIPLEX, service.on_track_multiplex)
 
-        if no_subs:
+        if hasattr(service, "NO_SUBTITLES") and service.NO_SUBTITLES:
+            console.log("Skipping subtitles - service does not support subtitle downloads")
+            no_subs = True
+            s_lang = None
+            title.tracks.subtitles = []
+        elif no_subs:
             console.log("Skipped subtitles as --no-subs was used...")
             s_lang = None
             title.tracks.subtitles = []
@@ -560,8 +573,31 @@ class dl:
             )
 
         with console.status("Sorting tracks by language and bitrate...", spinner="dots"):
-            title.tracks.sort_videos(by_language=v_lang or lang)
-            title.tracks.sort_audio(by_language=lang)
+            video_sort_lang = v_lang or lang
+            processed_video_sort_lang = []
+            for language in video_sort_lang:
+                if language == "orig":
+                    if title.language:
+                        orig_lang = str(title.language) if hasattr(title.language, "__str__") else title.language
+                        if orig_lang not in processed_video_sort_lang:
+                            processed_video_sort_lang.append(orig_lang)
+                else:
+                    if language not in processed_video_sort_lang:
+                        processed_video_sort_lang.append(language)
+
+            processed_audio_sort_lang = []
+            for language in lang:
+                if language == "orig":
+                    if title.language:
+                        orig_lang = str(title.language) if hasattr(title.language, "__str__") else title.language
+                        if orig_lang not in processed_audio_sort_lang:
+                            processed_audio_sort_lang.append(orig_lang)
+                else:
+                    if language not in processed_audio_sort_lang:
+                        processed_audio_sort_lang.append(language)
+
+            title.tracks.sort_videos(by_language=processed_video_sort_lang)
+            title.tracks.sort_audio(by_language=processed_audio_sort_lang)
             title.tracks.sort_subtitles(by_language=s_lang)
 
         if list_:
@@ -592,12 +628,27 @@ class dl:
                     self.log.error(f"There's no {vbitrate}kbps Video Track...")
                     sys.exit(1)
 
             # Filter out "best" from the video languages list.
             video_languages = [lang for lang in (v_lang or lang) if lang != "best"]
             if video_languages and "all" not in video_languages:
-                title.tracks.videos = title.tracks.by_language(title.tracks.videos, video_languages)
+                processed_video_lang = []
+                for language in video_languages:
+                    if language == "orig":
+                        if title.language:
+                            orig_lang = (
+                                str(title.language) if hasattr(title.language, "__str__") else title.language
+                            )
+                            if orig_lang not in processed_video_lang:
+                                processed_video_lang.append(orig_lang)
+                        else:
+                            self.log.warning(
+                                "Original language not available for title, skipping 'orig' selection for video"
+                            )
+                    else:
+                        if language not in processed_video_lang:
+                            processed_video_lang.append(language)
+                title.tracks.videos = title.tracks.by_language(title.tracks.videos, processed_video_lang)
                 if not title.tracks.videos:
-                    self.log.error(f"There's no {video_languages} Video Track...")
+                    self.log.error(f"There's no {processed_video_lang} Video Track...")
                     sys.exit(1)
 
             if quality:
@@ -672,7 +723,8 @@ class dl:
                     self.log.error(f"There's no {s_lang} Subtitle Track...")
                     sys.exit(1)
 
-                title.tracks.select_subtitles(lambda x: not x.forced or is_close_match(x.language, lang))
+                if not forced_subs:
+                    title.tracks.select_subtitles(lambda x: not x.forced or is_close_match(x.language, lang))
 
             # filter audio tracks
             # might have no audio tracks if part of the video, e.g. transport stream hls
@@ -699,8 +751,24 @@ class dl:
                     self.log.error(f"There's no {abitrate}kbps Audio Track...")
                     sys.exit(1)
             if lang:
-                if "best" in lang:
+                processed_lang = []
+                for language in lang:
+                    if language == "orig":
+                        if title.language:
+                            orig_lang = (
+                                str(title.language) if hasattr(title.language, "__str__") else title.language
+                            )
+                            if orig_lang not in processed_lang:
+                                processed_lang.append(orig_lang)
+                        else:
+                            self.log.warning(
+                                "Original language not available for title, skipping 'orig' selection"
+                            )
+                    else:
+                        if language not in processed_lang:
+                            processed_lang.append(language)
+
+                if "best" in processed_lang:
                     # Get unique languages and select highest quality for each
                     unique_languages = {track.language for track in title.tracks.audio}
                     selected_audio = []
                     for language in unique_languages:
@@ -710,30 +778,36 @@ class dl:
                         )
                         selected_audio.append(highest_quality)
                     title.tracks.audio = selected_audio
-                elif "all" not in lang:
-                    title.tracks.audio = title.tracks.by_language(title.tracks.audio, lang, per_language=1)
+                elif "all" not in processed_lang:
+                    per_language = 0 if len(processed_lang) > 1 else 1
+                    title.tracks.audio = title.tracks.by_language(
+                        title.tracks.audio, processed_lang, per_language=per_language
+                    )
                 if not title.tracks.audio:
-                    self.log.error(f"There's no {lang} Audio Track, cannot continue...")
+                    self.log.error(f"There's no {processed_lang} Audio Track, cannot continue...")
                     sys.exit(1)
 
             if video_only or audio_only or subs_only or chapters_only or no_subs or no_audio or no_chapters:
                 # Determine which track types to keep based on the flags
-                keep_videos = True
-                keep_audio = True
-                keep_subtitles = True
-                keep_chapters = True
+                keep_videos = False
+                keep_audio = False
+                keep_subtitles = False
+                keep_chapters = False
 
-                # Handle exclusive flags (only keep one type)
-                if video_only:
-                    keep_audio = keep_subtitles = keep_chapters = False
-                elif audio_only:
-                    keep_videos = keep_subtitles = keep_chapters = False
-                elif subs_only:
-                    keep_videos = keep_audio = keep_chapters = False
-                elif chapters_only:
-                    keep_videos = keep_audio = keep_subtitles = False
+                if video_only or audio_only or subs_only or chapters_only:
+                    if video_only:
+                        keep_videos = True
+                    if audio_only:
+                        keep_audio = True
+                    if subs_only:
+                        keep_subtitles = True
+                    if chapters_only:
+                        keep_chapters = True
+                else:
+                    keep_videos = True
+                    keep_audio = True
+                    keep_subtitles = True
+                    keep_chapters = True
 
                 # Handle exclusion flags (remove specific types)
                 if no_subs:
                     keep_subtitles = False
                 if no_audio:
@@ -741,7 +815,6 @@ class dl:
                 if no_chapters:
                     keep_chapters = False
 
-                # Build the kept_tracks list without duplicates
                 kept_tracks = []
                 if keep_videos:
                     kept_tracks.extend(title.tracks.videos)
@@ -765,8 +838,7 @@ class dl:
             DOWNLOAD_LICENCE_ONLY.set()
 
         try:
-            # Use transient mode to prevent display remnants
-            with Live(Padding(download_table, (1, 5)), console=console, refresh_per_second=5, transient=True):
+            with Live(Padding(download_table, (1, 5)), console=console, refresh_per_second=5):
                 with ThreadPoolExecutor(downloads) as pool:
                     for download in futures.as_completed(
                         (
@@ -839,6 +911,7 @@ class dl:
                 while (
                     not title.tracks.subtitles
                     and not no_subs
+                    and not (hasattr(service, "NO_SUBTITLES") and service.NO_SUBTITLES)
                     and not video_only
                     and len(title.tracks.videos) > video_track_n
                     and any(
@@ -927,12 +1000,15 @@ class dl:
             with console.status(f"Decrypting tracks with {decrypt_tool}..."):
                 has_decrypted = False
                 for track in drm_tracks:
-                    for drm in track.drm:
-                        if hasattr(drm, "decrypt"):
-                            drm.decrypt(track.path, use_mp4decrypt=use_mp4decrypt)
-                            has_decrypted = True
-                            events.emit(events.Types.TRACK_REPACKED, track=track)
-                            break
+                    drm = track.get_drm_for_cdm(self.cdm)
+                    if drm and hasattr(drm, "decrypt"):
+                        drm.decrypt(track.path, use_mp4decrypt=use_mp4decrypt)
+                        has_decrypted = True
+                        events.emit(events.Types.TRACK_REPACKED, track=track)
+                    else:
+                        self.log.warning(
+                            f"No matching DRM found for track {track} with CDM type {type(self.cdm).__name__}"
+                        )
                 if has_decrypted:
                     self.log.info(f"Decrypted tracks with {decrypt_tool}")
@@ -1035,7 +1111,7 @@ class dl:
 
             multiplex_tasks.append((task_id, task_tracks))
 
-        with Live(Padding(progress, (0, 5, 1, 5)), console=console, transient=True):
+        with Live(Padding(progress, (0, 5, 1, 5)), console=console):
             for task_id, task_tracks in multiplex_tasks:
                 progress.start_task(task_id)  # TODO: Needed?
                 muxed_path, return_code, errors = task_tracks.mux(
@@ -1 +1 @@
-__version__ = "1.3.0"
+__version__ = "1.4.0"
@@ -1,7 +1,4 @@
-import atexit
 import logging
-import signal
-import sys
 from datetime import datetime
 from types import ModuleType
 from typing import IO, Callable, Iterable, List, Literal, Mapping, Optional, Union
@@ -170,8 +167,6 @@ class ComfyConsole(Console):
             time.monotonic.
         """
 
-    _cleanup_registered = False
-
     def __init__(
         self,
         *,
@@ -238,9 +233,6 @@ class ComfyConsole(Console):
         if log_renderer:
             self._log_render = log_renderer
 
-        # Register terminal cleanup handlers
-        self._register_cleanup()
-
     def status(
         self,
         status: RenderableType,
@@ -291,38 +283,6 @@ class ComfyConsole(Console):
 
         return status_renderable
 
-    def _register_cleanup(self):
-        """Register terminal cleanup handlers."""
-        if not ComfyConsole._cleanup_registered:
-            ComfyConsole._cleanup_registered = True
-
-            # Register cleanup on normal exit
-            atexit.register(self._cleanup_terminal)
-
-            # Register cleanup on signals
-            signal.signal(signal.SIGINT, self._signal_handler)
-            signal.signal(signal.SIGTERM, self._signal_handler)
-
-    def _cleanup_terminal(self):
-        """Restore terminal to a clean state."""
-        try:
-            # Show cursor using ANSI escape codes
-            sys.stdout.write("\x1b[?25h")  # Show cursor
-            sys.stdout.write("\x1b[0m")  # Reset attributes
-            sys.stdout.flush()
-
-            # Also use Rich's method
-            self.show_cursor(True)
-        except Exception:
-            # Silently fail if cleanup fails
-            pass
-
-    def _signal_handler(self, signum, frame):
-        """Handle signals with cleanup."""
-        self._cleanup_terminal()
-        # Exit after cleanup
-        sys.exit(1)
-
 
 catppuccin_mocha = {
     # Colors based on "CatppuccinMocha" from Gogh themes
@@ -39,17 +39,23 @@ class PlayReady:
         if not isinstance(pssh, PSSH):
             raise TypeError(f"Expected pssh to be a {PSSH}, not {pssh!r}")
 
-        kids: list[UUID] = []
-        for header in pssh.wrm_headers:
-            try:
-                signed_ids, _, _, _ = header.read_attributes()
-            except Exception:
-                continue
-            for signed_id in signed_ids:
-                kids.append(UUID(bytes_le=base64.b64decode(signed_id.value)))
+        if pssh_b64:
+            kids = self._extract_kids_from_pssh_b64(pssh_b64)
+        else:
+            kids = []
+
+        # Extract KIDs using pyplayready's method (may miss some KIDs)
+        if not kids:
+            for header in pssh.wrm_headers:
+                try:
+                    signed_ids, _, _, _ = header.read_attributes()
+                except Exception:
+                    continue
+                for signed_id in signed_ids:
+                    try:
+                        kids.append(UUID(bytes_le=base64.b64decode(signed_id.value)))
+                    except Exception:
+                        continue
 
         if kid:
             if isinstance(kid, str):
@@ -72,6 +78,66 @@ class PlayReady:
         if pssh_b64:
             self.data.setdefault("pssh_b64", pssh_b64)
 
+    def _extract_kids_from_pssh_b64(self, pssh_b64: str) -> list[UUID]:
+        """Extract all KIDs from base64-encoded PSSH data."""
+        try:
+            import xml.etree.ElementTree as ET
+
+            # Decode the PSSH
+            pssh_bytes = base64.b64decode(pssh_b64)
+
+            # Try to find XML in the PSSH data
+            # PlayReady PSSH usually has XML embedded in it
+            pssh_str = pssh_bytes.decode("utf-16le", errors="ignore")
+
+            # Find WRMHEADER
+            xml_start = pssh_str.find("<WRMHEADER")
+            if xml_start == -1:
+                # Try UTF-8
+                pssh_str = pssh_bytes.decode("utf-8", errors="ignore")
+                xml_start = pssh_str.find("<WRMHEADER")
+
+            if xml_start != -1:
+                clean_xml = pssh_str[xml_start:]
+                xml_end = clean_xml.find("</WRMHEADER>") + len("</WRMHEADER>")
+                clean_xml = clean_xml[:xml_end]
+
+                root = ET.fromstring(clean_xml)
+                ns = {"pr": "http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader"}
+
+                kids = []
+
+                # Extract from CUSTOMATTRIBUTES/KIDS
+                kid_elements = root.findall(".//pr:CUSTOMATTRIBUTES/pr:KIDS/pr:KID", ns)
+                for kid_elem in kid_elements:
+                    value = kid_elem.get("VALUE")
+                    if value:
+                        try:
+                            kid_bytes = base64.b64decode(value + "==")
+                            kid_uuid = UUID(bytes_le=kid_bytes)
+                            kids.append(kid_uuid)
+                        except Exception:
+                            pass
+
+                # Also get individual KID
+                individual_kids = root.findall(".//pr:DATA/pr:KID", ns)
+                for kid_elem in individual_kids:
+                    if kid_elem.text:
+                        try:
+                            kid_bytes = base64.b64decode(kid_elem.text.strip() + "==")
+                            kid_uuid = UUID(bytes_le=kid_bytes)
+                            if kid_uuid not in kids:
+                                kids.append(kid_uuid)
+                        except Exception:
+                            pass
+
+                return kids
+
+        except Exception:
+            pass
+
+        return []
+
     @classmethod
     def from_track(cls, track: AnyTrack, session: Optional[Session] = None) -> PlayReady:
         if not session:
@@ -170,8 +170,9 @@ class Episode(Title):
             frame_rate = float(primary_video_track.frame_rate)
             if hdr_format:
-                if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
-                    name += f" DV {DYNAMIC_RANGE_MAP.get(hdr_format)} "
+                if (primary_video_track.hdr_format_commercial) != "Dolby Vision":
+                    name += " DV"
+                    if DYNAMIC_RANGE_MAP.get(hdr_format) and DYNAMIC_RANGE_MAP.get(hdr_format) != "DV":
+                        name += " HDR"
                 else:
                     name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
             elif trc and "HLG" in trc:
@@ -201,9 +202,10 @@ class Series(SortedKeyList, ABC):
     def tree(self, verbose: bool = False) -> Tree:
         seasons = Counter(x.season for x in self)
         num_seasons = len(seasons)
-        num_episodes = sum(seasons.values())
+        sum(seasons.values())
+        season_breakdown = ", ".join(f"S{season}({count})" for season, count in sorted(seasons.items()))
         tree = Tree(
-            f"{num_seasons} Season{['s', ''][num_seasons == 1]}, {num_episodes} Episode{['s', ''][num_episodes == 1]}",
+            f"{num_seasons} seasons, {season_breakdown}",
             guide_style="bright_black",
         )
         if verbose:
@@ -121,8 +121,9 @@ class Movie(Title):
             frame_rate = float(primary_video_track.frame_rate)
             if hdr_format:
-                if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
-                    name += f" DV {DYNAMIC_RANGE_MAP.get(hdr_format)} "
+                if (primary_video_track.hdr_format_commercial) != "Dolby Vision":
+                    name += " DV"
+                    if DYNAMIC_RANGE_MAP.get(hdr_format) and DYNAMIC_RANGE_MAP.get(hdr_format) != "DV":
+                        name += " HDR"
                 else:
                     name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
             elif trc and "HLG" in trc:
@@ -126,38 +126,40 @@ class Hybrid:
     def extract_stream(self, save_path, type_):
         output = Path(config.directories.temp / f"{type_}.hevc")
 
-        self.log.info(f"+ Extracting {type_} stream")
-
-        returncode = self.ffmpeg_simple(save_path, output)
+        with console.status(f"Extracting {type_} stream...", spinner="dots"):
+            returncode = self.ffmpeg_simple(save_path, output)
 
         if returncode:
             output.unlink(missing_ok=True)
             self.log.error(f"x Failed extracting {type_} stream")
             sys.exit(1)
 
+        self.log.info(f"Extracted {type_} stream")
+
     def extract_rpu(self, video, untouched=False):
         if os.path.isfile(config.directories.temp / "RPU.bin") or os.path.isfile(
             config.directories.temp / "RPU_UNT.bin"
         ):
             return
 
-        self.log.info(f"+ Extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")
-
-        extraction_args = [str(DoviTool)]
-        if not untouched:
-            extraction_args += ["-m", "3"]
-        extraction_args += [
-            "extract-rpu",
-            config.directories.temp / "DV.hevc",
-            "-o",
-            config.directories.temp / f"{'RPU' if not untouched else 'RPU_UNT'}.bin",
-        ]
-
-        rpu_extraction = subprocess.run(
-            extraction_args,
-            stdout=subprocess.PIPE,
-            stderr=subprocess.PIPE,
-        )
+        with console.status(
+            f"Extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream...", spinner="dots"
+        ):
+            extraction_args = [str(DoviTool)]
+            if not untouched:
+                extraction_args += ["-m", "3"]
+            extraction_args += [
+                "extract-rpu",
+                config.directories.temp / "DV.hevc",
+                "-o",
+                config.directories.temp / f"{'RPU' if not untouched else 'RPU_UNT'}.bin",
+            ]
+
+            rpu_extraction = subprocess.run(
+                extraction_args,
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+            )
 
         if rpu_extraction.returncode:
             Path.unlink(config.directories.temp / f"{'RPU' if not untouched else 'RPU_UNT'}.bin")
@@ -168,6 +170,8 @@ class Hybrid:
         else:
             raise ValueError(f"Failed extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")
 
+        self.log.info(f"Extracted{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")
+
     def level_6(self):
         """Edit RPU Level 6 values"""
         with open(config.directories.temp / "L6.json", "w+") as level6_file:
@@ -185,26 +189,28 @@ class Hybrid:
             json.dump(level6, level6_file, indent=3)
 
         if not os.path.isfile(config.directories.temp / "RPU_L6.bin"):
-            self.log.info("+ Editing RPU Level 6 values")
-            level6 = subprocess.run(
-                [
-                    str(DoviTool),
-                    "editor",
-                    "-i",
-                    config.directories.temp / self.rpu_file,
-                    "-j",
-                    config.directories.temp / "L6.json",
-                    "-o",
-                    config.directories.temp / "RPU_L6.bin",
-                ],
-                stdout=subprocess.PIPE,
-                stderr=subprocess.PIPE,
-            )
+            with console.status("Editing RPU Level 6 values...", spinner="dots"):
+                level6 = subprocess.run(
+                    [
+                        str(DoviTool),
+                        "editor",
+                        "-i",
+                        config.directories.temp / self.rpu_file,
+                        "-j",
+                        config.directories.temp / "L6.json",
+                        "-o",
+                        config.directories.temp / "RPU_L6.bin",
+                    ],
+                    stdout=subprocess.PIPE,
+                    stderr=subprocess.PIPE,
+                )
 
             if level6.returncode:
                 Path.unlink(config.directories.temp / "RPU_L6.bin")
                 raise ValueError("Failed editing RPU Level 6 values")
 
+            self.log.info("Edited RPU Level 6 values")
+
             # Update rpu_file to use the edited version
             self.rpu_file = "RPU_L6.bin"
@@ -212,35 +218,36 @@ class Hybrid:
         if os.path.isfile(config.directories.temp / self.hevc_file):
             return
 
-        self.log.info(f"+ Injecting Dolby Vision metadata into {self.hdr_type} stream")
-
-        inject_cmd = [
-            str(DoviTool),
-            "inject-rpu",
-            "-i",
-            config.directories.temp / "HDR10.hevc",
-            "--rpu-in",
-            config.directories.temp / self.rpu_file,
-        ]
-
-        # If we converted from HDR10+, optionally remove HDR10+ metadata during injection
-        # Default to removing HDR10+ metadata since we're converting to DV
-        if self.hdr10plus_to_dv:
-            inject_cmd.append("--drop-hdr10plus")
-            self.log.info(" - Removing HDR10+ metadata during injection")
-
-        inject_cmd.extend(["-o", config.directories.temp / self.hevc_file])
-
-        inject = subprocess.run(
-            inject_cmd,
-            stdout=subprocess.PIPE,
-            stderr=subprocess.PIPE,
-        )
+        with console.status(f"Injecting Dolby Vision metadata into {self.hdr_type} stream...", spinner="dots"):
+            inject_cmd = [
+                str(DoviTool),
+                "inject-rpu",
+                "-i",
+                config.directories.temp / "HDR10.hevc",
+                "--rpu-in",
+                config.directories.temp / self.rpu_file,
+            ]
+
+            # If we converted from HDR10+, optionally remove HDR10+ metadata during injection
+            # Default to removing HDR10+ metadata since we're converting to DV
+            if self.hdr10plus_to_dv:
+                inject_cmd.append("--drop-hdr10plus")
+                self.log.info(" - Removing HDR10+ metadata during injection")
+
+            inject_cmd.extend(["-o", config.directories.temp / self.hevc_file])
+
+            inject = subprocess.run(
+                inject_cmd,
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+            )
 
         if inject.returncode:
             Path.unlink(config.directories.temp / self.hevc_file)
             raise ValueError("Failed injecting Dolby Vision metadata into HDR10 stream")
 
+        self.log.info(f"Injected Dolby Vision metadata into {self.hdr_type} stream")
+
     def extract_hdr10plus(self, _video):
         """Extract HDR10+ metadata from the video stream"""
         if os.path.isfile(config.directories.temp / self.hdr10plus_file):
@@ -249,20 +256,19 @@ class Hybrid:
         if not HDR10PlusTool:
             raise ValueError("HDR10Plus_tool not found. Please install it to use HDR10+ to DV conversion.")
 
-        self.log.info("+ Extracting HDR10+ metadata")
-
-        # HDR10Plus_tool needs raw HEVC stream
-        extraction = subprocess.run(
-            [
-                str(HDR10PlusTool),
-                "extract",
-                str(config.directories.temp / "HDR10.hevc"),
-                "-o",
-                str(config.directories.temp / self.hdr10plus_file),
-            ],
-            stdout=subprocess.PIPE,
-            stderr=subprocess.PIPE,
-        )
+        with console.status("Extracting HDR10+ metadata...", spinner="dots"):
+            # HDR10Plus_tool needs raw HEVC stream
+            extraction = subprocess.run(
+                [
+                    str(HDR10PlusTool),
+                    "extract",
+                    str(config.directories.temp / "HDR10.hevc"),
+                    "-o",
+                    str(config.directories.temp / self.hdr10plus_file),
+                ],
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+            )
 
         if extraction.returncode:
             raise ValueError("Failed extracting HDR10+ metadata")
@@ -271,47 +277,49 @@ class Hybrid:
         if os.path.getsize(config.directories.temp / self.hdr10plus_file) == 0:
             raise ValueError("No HDR10+ metadata found in the stream")
 
+        self.log.info("Extracted HDR10+ metadata")
+
     def convert_hdr10plus_to_dv(self):
         """Convert HDR10+ metadata to Dolby Vision RPU"""
         if os.path.isfile(config.directories.temp / "RPU.bin"):
             return
 
-        self.log.info("+ Converting HDR10+ metadata to Dolby Vision")
-
-        # First create the extra metadata JSON for dovi_tool
-        extra_metadata = {
-            "cm_version": "V29",
-            "length": 0,  # dovi_tool will figure this out
-            "level6": {
-                "max_display_mastering_luminance": 1000,
-                "min_display_mastering_luminance": 1,
-                "max_content_light_level": 0,
-                "max_frame_average_light_level": 0,
-            },
-        }
-
-        with open(config.directories.temp / "extra.json", "w") as f:
-            json.dump(extra_metadata, f, indent=2)
-
-        # Generate DV RPU from HDR10+ metadata
-        conversion = subprocess.run(
-            [
-                str(DoviTool),
-                "generate",
-                "-j",
-                str(config.directories.temp / "extra.json"),
-                "--hdr10plus-json",
-                str(config.directories.temp / self.hdr10plus_file),
-                "-o",
-                str(config.directories.temp / "RPU.bin"),
-            ],
-            stdout=subprocess.PIPE,
-            stderr=subprocess.PIPE,
-        )
+        with console.status("Converting HDR10+ metadata to Dolby Vision...", spinner="dots"):
+            # First create the extra metadata JSON for dovi_tool
+            extra_metadata = {
+                "cm_version": "V29",
+                "length": 0,  # dovi_tool will figure this out
+                "level6": {
+                    "max_display_mastering_luminance": 1000,
+                    "min_display_mastering_luminance": 1,
+                    "max_content_light_level": 0,
+                    "max_frame_average_light_level": 0,
+                },
+            }
+
+            with open(config.directories.temp / "extra.json", "w") as f:
+                json.dump(extra_metadata, f, indent=2)
+
+            # Generate DV RPU from HDR10+ metadata
+            conversion = subprocess.run(
+                [
+                    str(DoviTool),
+                    "generate",
+                    "-j",
+                    str(config.directories.temp / "extra.json"),
+                    "--hdr10plus-json",
+                    str(config.directories.temp / self.hdr10plus_file),
+                    "-o",
+                    str(config.directories.temp / "RPU.bin"),
+                ],
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+            )
 
         if conversion.returncode:
             raise ValueError("Failed converting HDR10+ to Dolby Vision")
 
-        self.log.info("Converted HDR10+ metadata to Dolby Vision")
+        self.log.info("✓ HDR10+ successfully converted to Dolby Vision Profile 8")
 
         # Clean up temporary files
@@ -233,6 +233,7 @@ class Subtitle(Track):
             try:
                 caption_set = pycaption.WebVTTReader().read(text)
                 Subtitle.merge_same_cues(caption_set)
+                Subtitle.filter_unwanted_cues(caption_set)
                 subtitle_text = pycaption.WebVTTWriter().write(caption_set)
                 self.path.write_text(subtitle_text, encoding="utf8")
             except pycaption.exceptions.CaptionReadSyntaxError:
@@ -241,6 +242,7 @@ class Subtitle(Track):
             try:
                 caption_set = pycaption.WebVTTReader().read(text)
                 Subtitle.merge_same_cues(caption_set)
+                Subtitle.filter_unwanted_cues(caption_set)
                 subtitle_text = pycaption.WebVTTWriter().write(caption_set)
                 self.path.write_text(subtitle_text, encoding="utf8")
             except Exception:
@@ -444,6 +446,8 @@ class Subtitle(Track):
 
         caption_set = self.parse(self.path.read_bytes(), self.codec)
         Subtitle.merge_same_cues(caption_set)
+        if codec == Subtitle.Codec.WebVTT:
+            Subtitle.filter_unwanted_cues(caption_set)
         subtitle_text = writer().write(caption_set)
 
         output_path.write_text(subtitle_text, encoding="utf8")
@@ -520,6 +524,8 @@ class Subtitle(Track):
 
         caption_set = self.parse(self.path.read_bytes(), self.codec)
         Subtitle.merge_same_cues(caption_set)
+        if codec == Subtitle.Codec.WebVTT:
+            Subtitle.filter_unwanted_cues(caption_set)
         subtitle_text = writer().write(caption_set)
 
         output_path.write_text(subtitle_text, encoding="utf8")
@@ -681,6 +687,24 @@ class Subtitle(Track):
         if merged_captions:
             caption_set.set_captions(lang, merged_captions)
 
+    @staticmethod
+    def filter_unwanted_cues(caption_set: pycaption.CaptionSet):
+        """
+        Filter out subtitle cues containing only non-breaking spaces (&nbsp;) or whitespace.
+        """
+        for lang in caption_set.get_languages():
+            captions = caption_set.get_captions(lang)
+            filtered_captions = pycaption.CaptionList()
+
+            for caption in captions:
+                text = caption.get_text().strip()
+                if not text or text == " " or all(c in " \t\n\r\xa0" for c in text.replace(" ", "\xa0")):
+                    continue
+
+                filtered_captions.append(caption)
+
+            caption_set.set_captions(lang, filtered_captions)
+
     @staticmethod
     def merge_segmented_wvtt(data: bytes, period_start: float = 0.0) -> tuple[CaptionList, Optional[str]]:
         """
@@ -355,6 +355,14 @@ class Tracks:
                 ]
             )
 
+            if hasattr(vt, "range") and vt.range == Video.Range.HLG:
+                video_args.extend(
+                    [
+                        "--color-transfer-characteristics",
+                        "0:18",  # ARIB STD-B67 (HLG)
+                    ]
+                )
+
             cl.extend(video_args + ["(", str(vt.path), ")"])
 
         for i, at in enumerate(self.audio):
@@ -10,11 +10,22 @@ import requests
 
 
 class UpdateChecker:
-    """Check for available updates from the GitHub repository."""
+    """
+    Check for available updates from the GitHub repository.
+
+    This class provides functionality to check for newer versions of the application
+    by querying the GitHub releases API. It includes rate limiting, caching, and
+    both synchronous and asynchronous interfaces.
+
+    Attributes:
+        REPO_URL: GitHub API URL for latest release
+        TIMEOUT: Request timeout in seconds
+        DEFAULT_CHECK_INTERVAL: Default time between checks in seconds (24 hours)
+    """
 
     REPO_URL = "https://api.github.com/repos/unshackle-dl/unshackle/releases/latest"
     TIMEOUT = 5
-    DEFAULT_CHECK_INTERVAL = 24 * 60 * 60  # 24 hours in seconds
+    DEFAULT_CHECK_INTERVAL = 24 * 60 * 60
 
     @classmethod
     def _get_cache_file(cls) -> Path:
@@ -23,6 +34,86 @@ class UpdateChecker:
 
         return config.directories.cache / "update_check.json"
 
+    @classmethod
+    def _load_cache_data(cls) -> dict:
+        """
+        Load cache data from file.
+
+        Returns:
+            Cache data dictionary or empty dict if loading fails
+        """
+        cache_file = cls._get_cache_file()
+
+        if not cache_file.exists():
+            return {}
+
+        try:
+            with open(cache_file, "r") as f:
+                return json.load(f)
+        except (json.JSONDecodeError, OSError):
+            return {}
+
+    @staticmethod
+    def _parse_version(version_string: str) -> str:
+        """
+        Parse and normalize version string by removing 'v' prefix.
+
+        Args:
+            version_string: Raw version string from API
+
+        Returns:
+            Cleaned version string
+        """
+        return version_string.lstrip("v")
+
+    @staticmethod
+    def _is_valid_version(version: str) -> bool:
+        """
+        Validate version string format.
+
+        Args:
+            version: Version string to validate
+
+        Returns:
+            True if version string is valid semantic version, False otherwise
+        """
+        if not version or not isinstance(version, str):
+            return False
+
+        try:
+            parts = version.split(".")
+            if len(parts) < 2:
+                return False
+
+            for part in parts:
+                int(part)
+
+            return True
+        except (ValueError, AttributeError):
+            return False
+
+    @classmethod
+    def _fetch_latest_version(cls) -> Optional[str]:
+        """
+        Fetch the latest version from GitHub API.
+
+        Returns:
+            Latest version string if successful, None otherwise
+        """
+        try:
+            response = requests.get(cls.REPO_URL, timeout=cls.TIMEOUT)
+
+            if response.status_code != 200:
+                return None
+
+            data = response.json()
+            latest_version = cls._parse_version(data.get("tag_name", ""))
+
+            return latest_version if cls._is_valid_version(latest_version) else None
+
+        except Exception:
+            return None
+
     @classmethod
     def _should_check_for_updates(cls, check_interval: int = DEFAULT_CHECK_INTERVAL) -> bool:
         """
@@ -34,45 +125,40 @@ class UpdateChecker:
         Returns:
             True if we should check for updates, False otherwise
         """
-        cache_file = cls._get_cache_file()
+        cache_data = cls._load_cache_data()
 
-        if not cache_file.exists():
+        if not cache_data:
             return True
 
-        try:
-            with open(cache_file, "r") as f:
-                cache_data = json.load(f)
-
-            last_check = cache_data.get("last_check", 0)
-            current_time = time.time()
-
-            return (current_time - last_check) >= check_interval
-
-        except (json.JSONDecodeError, KeyError, OSError):
-            # If cache is corrupted or unreadable, allow check
-            return True
+        last_check = cache_data.get("last_check", 0)
+        current_time = time.time()
+
+        return (current_time - last_check) >= check_interval
 
     @classmethod
-    def _update_cache(cls, latest_version: Optional[str] = None) -> None:
+    def _update_cache(cls, latest_version: Optional[str] = None, current_version: Optional[str] = None) -> None:
         """
-        Update the cache file with the current timestamp and latest version.
+        Update the cache file with the current timestamp and version info.
 
         Args:
             latest_version: The latest version found, if any
+            current_version: The current version being used
         """
         cache_file = cls._get_cache_file()
 
         try:
             # Ensure cache directory exists
            cache_file.parent.mkdir(parents=True, exist_ok=True)
 
-            cache_data = {"last_check": time.time(), "latest_version": latest_version}
+            cache_data = {
+                "last_check": time.time(),
+                "latest_version": latest_version,
+                "current_version": current_version,
+            }
 
             with open(cache_file, "w") as f:
-                json.dump(cache_data, f)
+                json.dump(cache_data, f, indent=2)
 
         except (OSError, json.JSONEncodeError):
             # Silently fail if we can't write cache
             pass
 
     @staticmethod
@@ -87,6 +173,9 @@ class UpdateChecker:
         Returns:
             True if latest > current, False otherwise
         """
+        if not UpdateChecker._is_valid_version(current) or not UpdateChecker._is_valid_version(latest):
+            return False
+
         try:
             current_parts = [int(x) for x in current.split(".")]
             latest_parts = [int(x) for x in latest.split(".")]
@@ -116,20 +205,14 @@ class UpdateChecker:
         Returns:
             The latest version string if an update is available, None otherwise
         """
+        if not cls._is_valid_version(current_version):
+            return None
+
         try:
             loop = asyncio.get_event_loop()
-            response = await loop.run_in_executor(None, lambda: requests.get(cls.REPO_URL, timeout=cls.TIMEOUT))
+            latest_version = await loop.run_in_executor(None, cls._fetch_latest_version)
 
-            if response.status_code != 200:
-                return None
-
-            data = response.json()
-            latest_version = data.get("tag_name", "").lstrip("v")
-
-            if not latest_version:
-                return None
-
-            if cls._compare_versions(current_version, latest_version):
+            if latest_version and cls._compare_versions(current_version, latest_version):
                 return latest_version
 
         except Exception:
@@ -137,6 +220,31 @@ class UpdateChecker:
 
         return None
 
+    @classmethod
+    def _get_cached_update_info(cls, current_version: str) -> Optional[str]:
+        """
+        Check if there's a cached update available for the current version.
+
+        Args:
+            current_version: The current version string
+
+        Returns:
+            The latest version string if an update is available from cache, None otherwise
+        """
+        cache_data = cls._load_cache_data()
+
+        if not cache_data:
+            return None
+
+        cached_current = cache_data.get("current_version")
+        cached_latest = cache_data.get("latest_version")
+
+        if cached_current == current_version and cached_latest:
+            if cls._compare_versions(current_version, cached_latest):
+                return cached_latest
+
+        return None
+
     @classmethod
     def check_for_updates_sync(cls, current_version: str, check_interval: Optional[int] = None) -> Optional[str]:
         """
@@ -149,40 +257,20 @@ class UpdateChecker:
         Returns:
             The latest version string if an update is available, None otherwise
         """
-        # Use config value if not specified
+        if not cls._is_valid_version(current_version):
+            return None
+
         if check_interval is None:
             from unshackle.core.config import config
 
-            check_interval = config.update_check_interval * 60 * 60  # Convert hours to seconds
+            check_interval = config.update_check_interval * 60 * 60
 
         # Check if we should skip this check due to rate limiting
         if not cls._should_check_for_updates(check_interval):
-            return None
+            return cls._get_cached_update_info(current_version)
 
-        try:
-            response = requests.get(cls.REPO_URL, timeout=cls.TIMEOUT)
-
-            if response.status_code != 200:
-                # Update cache even on failure to prevent rapid retries
-                cls._update_cache()
-                return None
-
-            data = response.json()
-            latest_version = data.get("tag_name", "").lstrip("v")
-
-            if not latest_version:
-                cls._update_cache()
-                return None
-
-            # Update cache with the latest version info
-            cls._update_cache(latest_version)
-
-            if cls._compare_versions(current_version, latest_version):
-                return latest_version
-
-        except Exception:
-            # Update cache even on exception to prevent rapid retries
-            cls._update_cache()
-            pass
+        latest_version = cls._fetch_latest_version()
+        cls._update_cache(latest_version, current_version)
+        if latest_version and cls._compare_versions(current_version, latest_version):
+            return latest_version
 
         return None
@@ -33,6 +33,7 @@ class EXAMPLE(Service):
 
     TITLE_RE = r"^(?:https?://?domain\.com/details/)?(?P<title_id>[^/]+)"
     GEOFENCE = ("US", "UK")
+    NO_SUBTITLES = True
 
     @staticmethod
    @click.command(name="EXAMPLE", short_help="https://domain.com")