Complete guide for using the yt-dlp-web-ui video downloading service on Haiven.
The Video Downloader is a production-grade service built on yt-dlp-web-ui, providing a web interface and REST API on top of yt-dlp.
Downloaded videos are saved to /mnt/nas1/media/videos/ytdlp. To queue a download via the API:

curl -X POST "https://downloader.haiven.local/api/v1/exec" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "path": "/downloads"
  }'
In the web interface, set the output path to /downloads (video) or /audio (audio files). When format selection is enabled in settings, you can choose a specific video quality:
Note: For best quality without manual selection, leave format selection disabled and use the default settings.
For quick audio extraction:
Set the output path to /audio. The output will be the extracted audio in best quality (typically M4A or OPUS).
Best Quality (Default):
The service is pre-configured to download the best available quality automatically.
Specific Resolution:
Use the format selection feature or add custom arguments:
# 1080p maximum
-f "bv*[height<=1080]+ba/best[height<=1080]"
# 720p maximum
-f "bv*[height<=720]+ba/best[height<=720]"
# 4K maximum
-f "bv*[height<=2160]+ba/best[height<=2160]"
Container Format:
Force specific output format:
--merge-output-format mp4 # MP4 container
--merge-output-format mkv # MKV container
--merge-output-format webm # WebM container
Video Formats:
- MP4 (default, best compatibility)
- WebM (smaller size, web-optimized)
- MKV (supports all codecs)
Audio Formats:
# MP3 (best compatibility)
-x --audio-format mp3 --audio-quality 0
# FLAC (lossless)
-x --audio-format flac
# M4A (high quality, small size)
-x --audio-format m4a --audio-quality 0
# OPUS (best quality/size ratio)
-x --audio-format opus --audio-quality 0
Download Subtitles (Manual and Auto-Generated):
--write-subs --write-auto-subs --sub-lang en
Embed Subtitles in Video:
--write-subs --embed-subs --sub-lang en
Download Separate Subtitle Files:
--write-subs --sub-lang en,es,fr --skip-download
Available Languages:
Replace language codes as needed: en (English), es (Spanish), fr (French), de (German), ja (Japanese), etc.
The following preset scripts are planned for the /mnt/apps/docker/utils/video-downloader/scripts/ directory to cover common download scenarios.
Note: These scripts are defined in the ROADMAP.md (Priority 2) but not yet implemented. Once created, they will provide convenient shortcuts for common operations.
Download video in maximum quality with MP4 output.
./scripts/download-best.sh "https://www.youtube.com/watch?v=VIDEO_ID"
What it does:
- Selects best video + best audio
- Merges into MP4 container
- Embeds metadata and thumbnail
Extract audio to MP3 format.
./scripts/download-audio-mp3.sh "https://www.youtube.com/watch?v=VIDEO_ID"
What it does:
- Extracts audio only
- Converts to MP3 (320kbps)
- Saves to /mnt/nas1/media/audio/ytdlp
- Embeds cover art and metadata
Extract audio to FLAC (lossless) format.
./scripts/download-audio-flac.sh "https://www.youtube.com/watch?v=VIDEO_ID"
What it does:
- Extracts audio only
- Converts to FLAC (lossless)
- Saves to /mnt/nas1/media/audio/ytdlp
Download video with embedded subtitles.
./scripts/download-with-subs.sh "https://www.youtube.com/watch?v=VIDEO_ID"
What it does:
- Downloads video + audio
- Downloads English subtitles
- Embeds subtitles in video file
- Falls back to auto-generated if manual subs unavailable
Download entire channel with archive tracking.
./scripts/archive-channel.sh "https://www.youtube.com/@channelname"
What it does:
- Downloads all videos from channel
- Tracks downloaded videos in archive.txt
- Skips previously downloaded videos on re-run
- Saves info JSON and thumbnails
Download entire playlist.
./scripts/download-playlist.sh "https://www.youtube.com/playlist?list=PLAYLIST_ID"
What it does:
- Downloads all videos in playlist order
- Maintains playlist structure in filenames
- Uses archive.txt to prevent duplicates
Until scripts are implemented, use the API directly:
# Best quality
curl -X POST "https://downloader.haiven.local/api/v1/exec" \
-H "Content-Type: application/json" \
-d '{
"url": "YOUR_URL",
"path": "/downloads",
"args": "-f \"bv*+ba/best\" --merge-output-format mp4"
}'
# Audio MP3
curl -X POST "https://downloader.haiven.local/api/v1/exec" \
-H "Content-Type: application/json" \
-d '{
"url": "YOUR_URL",
"path": "/audio",
"args": "-x --audio-format mp3 --audio-quality 0"
}'
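The planned scripts would essentially wrap these curl calls. A possible shape for download-best.sh, offered purely as a hypothetical sketch (this is not the official implementation; see ROADMAP.md):

```shell
#!/bin/sh
# Hypothetical sketch of download-best.sh (preset scripts not yet implemented).

build_payload() {
  # Escaped inner quotes keep the -f selector intact inside the JSON string.
  printf '{"url": "%s", "path": "/downloads", "args": "-f \\"bv*+ba/best\\" --merge-output-format mp4"}' "$1"
}

main() {
  url="$1"
  [ -n "$url" ] || { echo "usage: $0 URL" >&2; return 1; }
  curl -sf -X POST "https://downloader.haiven.local/api/v1/exec" \
    -H "Content-Type: application/json" \
    -d "$(build_payload "$url")"
}

# main "$@"   # uncomment when saving as a standalone script
```

Keeping the JSON construction in its own function makes the quoting, which is the error-prone part, easy to test in isolation.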
Cookies are required to download:
- Age-restricted content
- Private or unlisted videos
- Members-only content
- Premium/subscriber content
- Some region-locked content
Using "Get cookies.txt LOCALLY" Extension:
Install the extension
- Chrome/Edge: Get cookies.txt LOCALLY
- Firefox: cookies.txt
Log in to the site (e.g., YouTube, Vimeo) in your browser
Export cookies
- Click the extension icon
- Select "Export" or "Download"
- Save the file as cookies.txt
Copy to service
```bash
# Create cookies directory if needed
mkdir -p /mnt/apps/docker/utils/video-downloader/cookies
# Copy your cookies file
cp ~/Downloads/cookies.txt /mnt/apps/docker/utils/video-downloader/cookies/youtube.txt
# Set permissions
chmod 644 /mnt/apps/docker/utils/video-downloader/cookies/youtube.txt
```
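yt-dlp expects cookies in Netscape format, and exports from the extensions above normally begin with a `# Netscape HTTP Cookie File` header, so a quick sanity check after copying can catch a bad export early (the check itself is a suggestion, not part of the service):

```shell
# Sanity check: Netscape-format cookie files typically start with this header.
COOKIE_FILE=/mnt/apps/docker/utils/video-downloader/cookies/youtube.txt
if head -n 1 "$COOKIE_FILE" 2>/dev/null | grep -q 'Netscape HTTP Cookie File'; then
  echo "cookie file looks valid"
else
  echo "warning: $COOKIE_FILE may not be in Netscape format" >&2
fi
```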
Web UI:
Add to the custom arguments field:
--cookies /cookies/youtube.txt
API:
curl -X POST "https://downloader.haiven.local/api/v1/exec" \
-H "Content-Type: application/json" \
-d '{
"url": "https://www.youtube.com/watch?v=PRIVATE_VIDEO",
"path": "/downloads",
"args": "--cookies /cookies/youtube.txt"
}'
Recommended structure:
/mnt/apps/docker/utils/video-downloader/cookies/
├── youtube.txt # YouTube and YouTube Music
├── twitter.txt # Twitter/X
├── instagram.txt # Instagram
├── vimeo.txt # Vimeo
└── patreon.txt # Patreon
Security Note: Cookie files contain authentication tokens. Protect them with appropriate permissions (644) and never share publicly.
Automatically download new videos from subscribed channels.
Status: Planned feature (see ROADMAP.md Priority 5). Not yet implemented.
1. Create subscription list

   # Edit subscription file
   nano /mnt/apps/docker/utils/video-downloader/config/subscriptions.txt

2. Add channel URLs (one per line)

   https://www.youtube.com/@channel1
   https://www.youtube.com/@channel2
   https://www.youtube.com/@channel3

3. Automated daily check
- Cron job runs daily at 3 AM
- Downloads latest 5 videos from each channel
- Uses archive.txt to skip already downloaded videos
- Logs results to /mnt/apps/docker/utils/video-downloader/logs/subscriptions.log
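The planned sweep boils down to reading subscriptions.txt line by line and POSTing each channel to the API. A hedged sketch of that loop (the `sweep` function is hypothetical, and the actual API call is stubbed out with an echo):

```shell
# Hypothetical subscription sweep (the automated check is not yet implemented).
# Reads one channel URL per line; blank lines and #-comments are skipped.
SUBS=/mnt/apps/docker/utils/video-downloader/config/subscriptions.txt

sweep() {
  while IFS= read -r channel; do
    [ -z "$channel" ] && continue              # skip blank lines
    case "$channel" in \#*) continue ;; esac   # skip comments
    echo "queueing $channel"
    # POST each channel to /api/v1/exec here (see the manual curl command)
  done
}

if [ -f "$SUBS" ]; then
  sweep < "$SUBS"
fi
```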
Until automation is implemented, download channels manually with archive tracking:
curl -X POST "https://downloader.haiven.local/api/v1/exec" \
-H "Content-Type: application/json" \
-d '{
"url": "https://www.youtube.com/@channelname",
"path": "/downloads",
"args": "--download-archive /downloads/archive.txt --playlist-end 5"
}'
The --download-archive flag maintains a list of downloaded video IDs, preventing re-downloads on subsequent runs.
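The archive file itself is plain text, one `<extractor> <video-id>` pair per line, which makes it easy to inspect or query by hand:

```shell
# archive.txt holds one "<extractor> <video id>" pair per line, e.g.:
ARCHIVE=$(mktemp)
printf 'youtube dQw4w9WgXcQ\n' >> "$ARCHIVE"
printf 'youtube jNQXAC9IVRw\n' >> "$ARCHIVE"

# yt-dlp consults this list before downloading; you can check it too:
if grep -q '^youtube dQw4w9WgXcQ$' "$ARCHIVE"; then
  echo "already downloaded, will be skipped"
fi
```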
The Video Downloader provides a REST API for automation and integration.
Base URL: https://downloader.haiven.local
Documentation: https://downloader.haiven.local/openapi
Start a new download.
Request Body:
{
"url": "https://www.youtube.com/watch?v=VIDEO_ID",
"path": "/downloads",
"rename": "",
"args": ""
}
Parameters:
- url (required): Video URL to download
- path (required): Output path (/downloads or /audio)
- rename (optional): Custom filename (without extension)
- args (optional): Additional yt-dlp arguments
Response:
{
"id": "unique-download-id",
"status": "queued",
"url": "https://www.youtube.com/watch?v=VIDEO_ID"
}
Example:
curl -X POST "https://downloader.haiven.local/api/v1/exec" \
-H "Content-Type: application/json" \
-d '{
"url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"path": "/downloads",
"args": "-f \"bv*[height<=1080]+ba\""
}'
Get list of currently running downloads.
Response:
[
{
"id": "download-id",
"url": "https://...",
"progress": 45.2,
"speed": "2.5 MiB/s",
"eta": "00:02:15"
}
]
Example:
curl https://downloader.haiven.local/api/v1/running | jq
Get list of completed downloads.
Response:
[
{
"id": "download-id",
"url": "https://...",
"filename": "video-title.mp4",
"status": "completed",
"timestamp": "2025-12-06T10:30:00Z"
}
]
Example:
curl https://downloader.haiven.local/api/v1/completed | jq
Get current queue status and active downloads.
Response:
{
"queue_size": 4,
"active": 2,
"queued": 3,
"downloads": [...]
}
Example:
curl https://downloader.haiven.local/api/v1/active | jq
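The `active` count in that response can drive a simple wait-until-idle loop in automation. A hedged sketch: the `count_active` helper is hypothetical, uses crude sed extraction so it runs without jq, and is demonstrated offline against the sample response above.

```shell
# Hypothetical helper: pull the "active" count out of an /api/v1/active response.
count_active() {
  # crude JSON field extraction; jq is cleaner when available
  sed -n 's/.*"active":[[:space:]]*\([0-9][0-9]*\).*/\1/p'
}

# Live usage (commented out; requires the service):
# curl -s https://downloader.haiven.local/api/v1/active | count_active

# Offline demonstration against the sample response above:
printf '{"queue_size":4,"active":2,"queued":3}' | count_active   # prints 2
```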
Add these to the args field for advanced control:
| Argument | Purpose | Example |
|---|---|---|
| `-f` | Format selection | `-f "bv*[height<=1080]+ba/best"` |
| `-x` | Extract audio only | `-x --audio-format mp3` |
| `--audio-quality` | Audio quality (0=best, 9=worst) | `--audio-quality 0` |
| `--write-subs` | Download subtitles | `--write-subs --sub-lang en` |
| `--embed-subs` | Embed subs in video | `--write-subs --embed-subs` |
| `--cookies` | Use cookie file | `--cookies /cookies/youtube.txt` |
| `--download-archive` | Track downloaded IDs | `--download-archive /downloads/archive.txt` |
| `--playlist-end` | Limit playlist items | `--playlist-end 10` |
| `--playlist-start` | Start from item N | `--playlist-start 5` |
| `--date-before` | Only videos before date | `--date-before 20231231` |
| `--date-after` | Only videos after date | `--date-after 20230101` |
| `--merge-output-format` | Output container | `--merge-output-format mp4` |
All downloads automatically include these arguments (set via YTDLP_ARGS in docker-compose.yml):
--no-check-certificate # Bypass SSL verification
--retries 10 # Retry failed downloads 10 times
--fragment-retries 10 # Retry failed fragments 10 times
--retry-sleep 5 # Wait 5 seconds between retries
--file-access-retries 10 # Retry file access 10 times
--embed-metadata # Embed video metadata
--embed-thumbnail # Embed thumbnail as cover art
--restrict-filenames # Use ASCII-only safe filenames
--geo-bypass # Bypass geographic restrictions
Downloads are organized by content type:
| Content Type | Path | Purpose |
|---|---|---|
| Videos | `/mnt/nas1/media/videos/ytdlp` | Downloaded video files (MP4, WebM, MKV) |
| Audio | `/mnt/nas1/media/audio/ytdlp` | Extracted audio files (MP3, FLAC, M4A) |
| Queue Database | `/mnt/apps/docker/utils/video-downloader/data` | Download queue persistence |
| Cookies | `/mnt/apps/docker/utils/video-downloader/cookies` | Browser cookies for auth |
| Logs | `/mnt/apps/docker/utils/video-downloader/logs` | Application and download logs |
| Config | `/mnt/apps/docker/utils/video-downloader/config` | Custom configuration files |
From Host System:
# List video downloads
ls -lh /mnt/nas1/media/videos/ytdlp
# List audio downloads
ls -lh /mnt/nas1/media/audio/ytdlp
# Find recent downloads (last 24 hours)
find /mnt/nas1/media/videos/ytdlp -type f -mtime -1
From Jellyfin/Plex:
Add the ytdlp directories to your media library scan paths.
Check disk usage:
du -sh /mnt/nas1/media/videos/ytdlp
du -sh /mnt/nas1/media/audio/ytdlp
Archive old downloads:
# Move files older than 30 days to archive
find /mnt/nas1/media/videos/ytdlp -type f -mtime +30 \
-exec mv {} /mnt/nas1/media/archive/ytdlp/ \;
Symptoms:
- docker ps -a shows video-downloader or category-scraper with status "Exited (128)"
- Service won't restart even with docker compose restart
- Error in logs about network namespace or container reference
Root Cause:
video-downloader and category-scraper use network_mode: "container:qbittorrent-vpn" to share the VPN container's network namespace. When the parent container (qbittorrent-vpn) is recreated or docker operations leave orphaned references, the child services receive SIGTERM and exit with code 128. They cannot restart because the container reference they depend on is stale.
How to Verify:
# Check service status - look for "Exited (128)"
docker ps -a | grep -E "video-downloader|category-scraper"
# Check if parent VPN container is running and healthy
docker ps | grep qbittorrent-vpn
# Check logs for network errors
docker logs video-downloader 2>&1 | tail -20
Fix:
# Step 1: Ensure the VPN parent container is running
docker ps | grep qbittorrent-vpn # Must show "Up" and "healthy"
# Step 2: Remove orphaned containers and recreate
cd /mnt/apps/docker/utils/video-downloader
docker compose rm -f video-downloader category-scraper
docker compose up -d video-downloader category-scraper
# Step 3: Verify services are running
docker ps | grep -E "video-downloader|category-scraper"
Prevention:
- Always ensure qbittorrent-vpn is healthy before starting these services
- When doing docker operations on the media stack, restart dependent services afterward
- Use docker compose down && docker compose up -d instead of individual container operations
Symptoms:
- video-downloader and category-scraper fail to start
- Error: "container not running" or "network namespace not found"
Root Cause:
These services depend on qbittorrent-vpn (defined in /mnt/apps/docker/media/docker-compose.yml) for their network. The VPN container must be running first.
Fix:
# Start the VPN container first
cd /mnt/apps/docker/media
docker compose up -d qbittorrent-vpn
# Wait for it to become healthy
docker compose ps qbittorrent-vpn # Should show "healthy"
# Then start the dependent services
cd /mnt/apps/docker/utils/video-downloader
docker compose up -d
Symptoms:
- Error: "Video unavailable"
- Error: "Private video"
- Error: "This video is not available in your country"
Solutions:
Check if video is actually accessible:
- Open URL in your browser
- Verify it's not private, deleted, or region-locked
For age-restricted content:
- Export cookies from logged-in browser session
- Use --cookies /cookies/youtube.txt argument
For region-locked content:
- Geo-bypass is enabled by default
- If still blocked, video may require VPN or specific cookies
For private videos:
- Must use cookies from authenticated session
- Ensure you have permission to access the video
Symptoms:
- Error: "Sign in to confirm your age"
- Error: "This video requires authentication"
Solution:
Export cookies from your browser (see Cookie Authentication) and use:
--cookies /cookies/youtube.txt
Symptoms:
- Error: "Requested format is not available"
- Video downloads at wrong quality
Solutions:
Check available formats:
# List all available formats
docker exec video-downloader yt-dlp -F "VIDEO_URL"
Use automatic best selection:
-f "bv*+ba/best"
Specify fallback formats:
-f "bv*[height<=1080]+ba/bv*+ba/best"
Symptoms:
- Download progress at 0% for extended period
- Very slow download speed
- Download appears frozen
Solutions:
Check running downloads:
curl https://downloader.haiven.local/api/v1/running | jq
Check queue status:
- Web UI: View queue panel
- API: curl https://downloader.haiven.local/api/v1/active
Restart the service:
cd /mnt/apps/docker/utils/video-downloader
docker compose restart
Check logs for errors:
docker logs video-downloader --tail 100
Adjust queue size:
Edit docker-compose.yml and reduce QUEUE_SIZE from 4 to 2:
```yaml
environment:
  - QUEUE_SIZE=2
```
Symptoms:
- Warning: "Failed to embed thumbnail"
- Audio files missing cover art
Cause:
FFmpeg may be unable to embed certain image formats.
Solution:
This is usually non-fatal. Files download successfully but lack embedded artwork. If critical, try:
--embed-thumbnail --convert-thumbnails jpg
View real-time logs:
docker logs -f video-downloader
View last 100 lines:
docker logs video-downloader --tail 100
Check specific download logs:
# Logs are in container logs
docker logs video-downloader | grep "VIDEO_ID"
Check health status:
docker inspect video-downloader | jq '.[0].State.Health'
yt-dlp is updated regularly to support site changes. Update between image releases:
# Update yt-dlp inside container
docker exec video-downloader pip install -U yt-dlp
# Verify new version
docker exec video-downloader yt-dlp --version
Note: Updates inside container are lost on container restart. For permanent updates, wait for new image release or rebuild with updated Dockerfile.
If experiencing issues, restart the service:
cd /mnt/apps/docker/utils/video-downloader
docker compose restart
For clean restart:
docker compose down
docker compose up -d
Slow downloads:
- Check network speed: speedtest-cli
- Reduce concurrent downloads: Edit QUEUE_SIZE in docker-compose.yml
- Check source server speed (some sites throttle downloads)
High CPU usage:
- Normal during format conversion (FFmpeg)
- If sustained, reduce QUEUE_SIZE
High memory usage:
- Normal for 4K video processing
- Reduce QUEUE_SIZE or add memory limits in docker-compose.yml
Use archive files for channels and playlists
- Add --download-archive /downloads/archive.txt to track downloaded videos
- Prevents re-downloading when checking for new content
- Essential for subscription-based workflows
Embed metadata for media library integration
- Enabled by default via --embed-metadata --embed-thumbnail
- Ensures Jellyfin/Plex can read title, artist, album art
- Critical for audio files in music libraries
Use restricted filenames for compatibility
- Enabled by default via --restrict-filenames
- Prevents issues with special characters on Windows/Mac
- Ensures files work across all platforms
Choose appropriate audio formats
- MP3: Best compatibility (all devices, all players)
- FLAC: Lossless archival quality (large files)
- M4A: High quality, smaller than FLAC, good compatibility
- OPUS: Best quality/size ratio, limited compatibility
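These tradeoffs can be folded into a tiny helper for scripts, mapping a goal to the audio arguments discussed earlier. The `pick_audio_args` name and its goal keywords are mine, purely illustrative:

```shell
# Hypothetical helper mapping a goal to the yt-dlp audio args discussed above.
pick_audio_args() {
  case "$1" in
    compat)    printf '%s\n' '-x --audio-format mp3 --audio-quality 0' ;;
    lossless)  printf '%s\n' '-x --audio-format flac' ;;
    balanced)  printf '%s\n' '-x --audio-format m4a --audio-quality 0' ;;
    efficient) printf '%s\n' '-x --audio-format opus --audio-quality 0' ;;
    *)         printf '%s\n' '-x' ;;   # let yt-dlp keep the source codec
  esac
}

pick_audio_args lossless   # → -x --audio-format flac
```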
Batch similar downloads
- Queue multiple videos at once
- Service processes concurrently (up to QUEUE_SIZE)
- More efficient than sequential single downloads
Use format codes instead of quality names
# Instead of: "1080p"
# Use: -f "bv*[height<=1080]+ba"
Leverage geo-bypass for region locks
- Enabled by default
- Works for most geographic restrictions
- No VPN configuration needed
Monitor queue via API
- Integrate with automation tools (n8n, Flowise)
- Build dashboards with Homepage widget
- Alert on failed downloads
Forgetting cookie expiration
- Browser cookies expire (typically 1-6 months)
- Re-export cookies if getting authentication errors on previously working URLs
- Test cookie files periodically
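Expiry can be checked without re-exporting: Netscape cookie lines store the expiry epoch in field 5 (0 marks a session cookie). The `count_expired` helper below is a hypothetical sketch for such a periodic check:

```shell
# Hypothetical expiry check: Netscape cookie lines keep the expiry epoch in
# field 5 (0 means "session cookie", which has no fixed expiry).
count_expired() {
  now="$1"
  awk -v now="$now" '!/^#/ && NF >= 7 && $5 > 0 && $5 < now { n++ } END { print n + 0 }'
}

# Offline demonstration with one expired and one far-future cookie line:
printf '.youtube.com\tTRUE\t/\tTRUE\t1\tSID\told\n.youtube.com\tTRUE\t/\tTRUE\t9999999999\tSID\tnew\n' \
  | count_expired "$(date +%s)"   # prints 1
```

Run against a real file: `count_expired "$(date +%s)" < /mnt/apps/docker/utils/video-downloader/cookies/youtube.txt`.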
Downloading entire channels without archive tracking
- Always use --download-archive for channels
- Without it, re-running downloads everything again
- Wastes bandwidth and storage
Not specifying output path
- Videos and audio mixed in same directory
- Use /downloads for video, /audio for audio-only
- Helps with media library organization
Ignoring playlist range limits
- Some playlists have 1000+ videos
- Use --playlist-end 50 to limit initial batch
- Expand range after verifying quality/content
Custom output templates:
-o "%(uploader)s/%(playlist)s/%(playlist_index)s-%(title)s.%(ext)s"
Filter by view count:
--match-filter "view_count > 10000"
Download only from specific date range:
--date-after 20240101 --date-before 20241231
Extract chapters as separate files:
--split-chapters
Download only audio streams (no conversion):
-f "ba" -o "%(title)s.%(ext)s"
Limit download speed:
--limit-rate 5M # 5 MB/s max
Jellyfin Media Library:
- Add /mnt/nas1/media/videos/ytdlp to video library paths
- Add /mnt/nas1/media/audio/ytdlp to music library paths
- Configure library to scan on schedule or trigger manually
Homepage Dashboard:
- Metrics available at /metrics endpoint
- Display active downloads count
- Show queue status
Prometheus Monitoring:
- Metrics scraped at port 3033
- Monitor download success/failure rates
- Track queue depth over time
N8n/Flowise Workflows:
- Use API to trigger downloads from workflows
- Parse completed downloads for metadata extraction
- Automate channel subscription checks
yt-dlp supports 1,000+ sites, including YouTube, Vimeo, Twitter/X, Instagram, and Patreon.
Full list: https://github.com/yt-dlp/yt-dlp/blob/master/supportedsites.md
Q: Can I download private or age-restricted videos?
A: Yes, by using cookies from an authenticated browser session. See Cookie Authentication.
Q: How many concurrent downloads can I run?
A: Default is 4 concurrent downloads (configurable via QUEUE_SIZE in docker-compose.yml). Adjust based on bandwidth and CPU availability.
Q: What happens if a download fails?
A: The service automatically retries up to 10 times with 5-second delays. If still failing, check logs for specific error.
Q: Can I schedule recurring downloads?
A: Yes, via channel subscriptions (see ROADMAP.md Priority 5). Currently planned but not yet implemented.
Q: What's the best format for archival?
A: Use -f "bv*+ba" for video (gets best available) and -x --audio-format flac for lossless audio archival.
Q: How do I download a whole YouTube channel?
A: Use the channel URL with archive tracking:
curl -X POST "https://downloader.haiven.local/api/v1/exec" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://www.youtube.com/@channel", "path": "/downloads",
       "args": "--download-archive /downloads/archive.txt"}'
Q: Can I limit the video quality to save bandwidth?
A: Yes, use format selection: -f "bv*[height<=720]+ba" for 720p maximum.
Q: How do I get only the audio from a video?
A: Use audio extraction: "path": "/audio", "args": "-x --audio-format mp3"
Q: Where are files stored?
A: Videos in /mnt/nas1/media/videos/ytdlp, audio in /mnt/nas1/media/audio/ytdlp.
Q: Can I use this from scripts or automation?
A: Yes! Use the REST API. See API Reference.
Q: Does this work with age-restricted content?
A: Yes, with cookies from a logged-in browser session.
Q: What's the archive.txt file for?
A: It tracks downloaded video IDs to prevent re-downloading the same video when updating a playlist or channel.
Q: How do I update yt-dlp?
A: Run docker exec video-downloader pip install -U yt-dlp or wait for the next image update.
Q: Can I download 4K videos?
A: Yes, if available. Use -f "bv*[height<=2160]+ba" to ensure 4K selection.
Last Updated: 2025-12-06
Service Version: yt-dlp-web-ui latest (marcobaobao/yt-dlp-webui)
For deployment and maintenance information, see README.md.