FileBrowser Upload Service

Web-based file management system for uploading AI models and files to the Haiven server.

Overview

The FileBrowser Upload Service provides a secure, web-based interface for uploading AI models and datasets to a staging area. This service implements a safety-first approach where uploads are staged in a separate directory before being manually promoted to production.

Key Features:
- Web UI for uploading large AI model files (GGUF, Stable Diffusion checkpoints, etc.)
- Staging area pattern prevents accidental deletion of production models
- Read-only access to production models for browsing and verification
- Internal-only access (not exposed externally)
- Support for large file uploads with extended timeouts

Service Details:
- Container Name: upload
- Domain: https://upload.haiven.local
- Image: filebrowser/filebrowser:v2.32.0-s6
- Networks: web, backend
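
A quick way to confirm the running container matches these details (standard docker CLI):

# Show the container's name, image, and status
docker ps --filter name=upload --format '{{.Names}}  {{.Image}}  {{.Status}}'

# List the Docker networks the container is attached to
docker inspect upload --format '{{range $net, $conf := .NetworkSettings.Networks}}{{$net}} {{end}}'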

Architecture

Volume Mounts

The service exposes three main mount points:

Container Path    Host Path                Access
/srv/uploads/     /mnt/storage/uploads/    read-write (staging area)
/srv/models/      /mnt/models/             read-only (production models)
/srv/storage/     /mnt/storage/            read-only (general storage)

Why This Design?
- Safety: Read-only production mounts prevent accidental deletion of 96GB+ model collections
- Validation: Admin can review uploads before promoting to production
- Organization: Separate staging area keeps production clean
- Transparency: Users can browse production models to verify availability
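
A quick check that the read-only mounts are enforced inside the container, assuming the compose file uses Docker's standard read-only (:ro) volume suffix:

# Writes to the staging area should succeed
docker exec upload sh -c 'touch /srv/uploads/.write-test && rm /srv/uploads/.write-test && echo staging OK'

# Writes to production mounts should fail with "Read-only file system"
docker exec upload touch /srv/models/.write-test
docker exec upload touch /srv/storage/.write-test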

Network Configuration

Traefik Integration

The service is configured with:
- Automatic HTTPS with TLS termination
- HTTP to HTTPS redirect
- Extended timeouts for large file uploads
- Internal-only access (no external exposure)
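
To see the Traefik labels actually applied to the container (the label names in the comments follow standard Traefik v2 conventions; the router name "upload" is illustrative, not taken from the actual compose file):

# Dump all Traefik-related labels on the container
docker inspect upload --format '{{json .Config.Labels}}' | tr ',' '\n' | grep traefik

# Typical labels for this kind of setup (illustrative):
#   traefik.enable=true
#   traefik.http.routers.upload.rule=Host(`upload.haiven.local`)
#   traefik.http.routers.upload.entrypoints=websecure
#   traefik.http.routers.upload.tls=true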

Quick Start

Starting the Service

cd /mnt/apps/docker/ai/upload-service
docker compose up -d

Stopping the Service

cd /mnt/apps/docker/ai/upload-service
docker compose down

Accessing the Web UI

  1. Open https://upload.haiven.local in your browser
  2. Log in with admin credentials (see below)
  3. Navigate to /uploads/models/ to begin uploading
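
From the command line, reachability can be confirmed with curl (use -k if the local CA is not in your trust store):

curl -kI https://upload.haiven.local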

Configuration Files

File                       Purpose
docker-compose.yml         Main Docker Compose configuration
.env                       Admin credentials (git-ignored)
config/settings.json       FileBrowser application settings
database/filebrowser.db    User accounts and FileBrowser state

Default Credentials

Username: admin

Password: Stored in .env file

To retrieve the password:

cat /mnt/apps/docker/ai/upload-service/.env

Security Note: The .env file is git-ignored and should never be committed to version control.

Upload Workflow

1. Upload Files

Using the web UI, upload files to the appropriate directory under /srv/uploads/ (see Directory Structure below for the expected layout).

2. Review Uploads

Admin reviews uploaded files via:
- FileBrowser web UI
- Direct SSH access to /mnt/storage/uploads/

Verify:
- File integrity (size, format)
- Correct naming conventions
- No duplicates with production
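
A minimal integrity check from the host before promoting (the filename is a placeholder):

# Size and checksum, to compare against the publisher's stated values
ls -lh /mnt/storage/uploads/models/gguf/model-name.gguf
sha256sum /mnt/storage/uploads/models/gguf/model-name.gguf

# Make sure no copy already exists in production
ls /mnt/models/gguf/ | grep -i model-name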

3. Promote to Production

Manually move approved files from staging to production:

# GGUF models
mv /mnt/storage/uploads/models/gguf/model-name.gguf /mnt/models/gguf/

# Stable Diffusion checkpoints
mv /mnt/storage/uploads/models/image/checkpoints/model.safetensors /mnt/models/image/checkpoints/

# LoRA models
mv /mnt/storage/uploads/models/image/loras/lora-model.safetensors /mnt/models/image/loras/

Important: After moving models to production, restart relevant services:

# Restart llama-swap to detect new GGUF models
docker compose -f /mnt/apps/docker/ai/llama-swap/docker-compose.yml restart

# Restart ComfyUI to detect new image models (when available)
docker compose -f /mnt/apps/docker/ai/comfyui/docker-compose.yml restart

Directory Structure

Upload Staging Area (/srv/uploads/models/)

/srv/uploads/models/
├── gguf/                      # GGUF LLM models
└── image/                     # Image generation models
    ├── checkpoints/           # SD/SDXL base models
    ├── loras/                 # LoRA fine-tuning models
    ├── vae/                   # VAE models
    ├── embeddings/            # Textual inversion embeddings
    ├── controlnet/            # ControlNet models
    └── text_encoders/         # CLIP and T5 encoders
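
If any of these staging subdirectories are missing on the host, they can be created in one pass with bash brace expansion (host path per the volume mount table above):

mkdir -p /mnt/storage/uploads/models/{gguf,image/{checkpoints,loras,vae,embeddings,controlnet,text_encoders}}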

Production Models (/srv/models/ - Read-Only)

Browse production models to verify availability:

/srv/models/
├── gguf/                      # Production GGUF models
└── image/                     # Production image models
    ├── checkpoints/
    ├── loras/
    └── ...

General Storage (/srv/storage/ - Read-Only)

Browse outputs and user data:

/srv/storage/
├── uploads/                   # (same as /srv/uploads/)
├── comfyui/                   # ComfyUI outputs
├── echo/                      # Echo/LibreChat data
└── ...

Resource Limits

The service is configured with the following resource constraints:

Resource    Limit      Reservation
CPU         4 cores    0.5 cores
Memory      2 GB       256 MB

These limits ensure the upload service doesn't impact GPU-based AI workloads while still supporting large file uploads.
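
To confirm the limits are applied to the running container:

# Configured limits (NanoCpus: 4 cores = 4000000000; Memory in bytes)
docker inspect upload --format 'CPUs={{.HostConfig.NanoCpus}} Memory={{.HostConfig.Memory}}'

# Live usage against those limits
docker stats upload --no-stream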

Security Notes

File System Security

- Production model and storage mounts are read-only inside the container, so the web UI cannot modify or delete production data
- Only the staging area (/srv/uploads/) is writable

Network Security

- The service is attached only to the internal web and backend Docker networks and is not exposed externally
- All traffic is served over HTTPS via Traefik, with HTTP redirected to HTTPS

Credential Management

- Admin credentials live in the git-ignored .env file and must never be committed to version control
- Rotate the admin password by updating .env and restarting the service (see Troubleshooting)

Common Commands

Service Management

# Start the service
cd /mnt/apps/docker/ai/upload-service && docker compose up -d

# Stop the service
cd /mnt/apps/docker/ai/upload-service && docker compose down

# Restart the service
cd /mnt/apps/docker/ai/upload-service && docker compose restart

# Check status
docker ps | grep upload

View Logs

# Follow logs in real-time
docker logs -f upload

# View last 100 lines
docker logs --tail 100 upload

# View logs with timestamps
docker logs -t upload

Check Disk Usage

# Check staging area disk usage
du -sh /mnt/storage/uploads/

# Check by model type
du -sh /mnt/storage/uploads/models/*
du -sh /mnt/storage/uploads/models/image/*

Clean Up Staging Area

# Remove leftover staging files after promotion (mv already clears promoted files)
rm -rf /mnt/storage/uploads/models/gguf/*

# Remove files older than 30 days
find /mnt/storage/uploads/models/ -type f -mtime +30 -delete
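
To preview what the 30-day cleanup would remove before running it with -delete:

find /mnt/storage/uploads/models/ -type f -mtime +30 -print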

Troubleshooting

Issue: Cannot log in with admin credentials

Solution: Reset the admin password using environment variables:

  1. Update the password in the .env file:

     echo "UPLOAD_ADMIN_PASSWORD=NewSecurePassword123" > /mnt/apps/docker/ai/upload-service/.env

  2. Restart the service:

     docker compose -f /mnt/apps/docker/ai/upload-service/docker-compose.yml restart

  3. If the password still does not apply, delete the database to force recreation:

     docker compose -f /mnt/apps/docker/ai/upload-service/docker-compose.yml down
     rm /mnt/apps/docker/ai/upload-service/database/filebrowser.db
     docker compose -f /mnt/apps/docker/ai/upload-service/docker-compose.yml up -d

Issue: Upload fails or times out

Possible causes:
- File too large (check disk space)
- Network timeout (extend timeout in Traefik config)
- Insufficient memory (check resource usage)

Solutions:

# Check disk space
df -h /mnt/storage

# Check container memory usage
docker stats upload --no-stream

# Check logs for errors
docker logs upload --tail 100

Issue: Cannot see production models in /srv/models/

Possible causes:
- Models directory not mounted correctly
- Permission issues

Solutions:

# Verify mount points
docker exec upload ls -la /srv/

# Check permissions on host
ls -la /mnt/models/

# Restart container
docker compose -f /mnt/apps/docker/ai/upload-service/docker-compose.yml restart

Issue: Service won't start

Solution: Check logs and verify configuration:

# View startup logs
docker logs upload

# Verify networks exist
docker network ls | grep -E 'web|backend'

# Recreate networks if missing
docker network create web
docker network create backend

# Validate compose file
cd /mnt/apps/docker/ai/upload-service
docker compose config

Issue: 404 or "Site cannot be reached"

Possible causes:
- Traefik not running
- DNS not resolving upload.haiven.local

Solutions:

# Check Traefik status
docker ps | grep traefik

# Verify Traefik labels
docker inspect upload | grep traefik

# Test DNS resolution
ping upload.haiven.local

# Add to /etc/hosts if needed
echo "10.10.0.2 upload.haiven.local" | sudo tee -a /etc/hosts

Performance Tips

Large File Uploads

For very large files (>10GB):

  1. Use SSH/SCP instead: For files >20GB, consider copying with SCP directly:

     scp -P 25636 large-model.gguf user@server:/mnt/storage/uploads/models/gguf/

  2. Split large files: If the upload fails, split the file, upload the parts, and reassemble them on the server (see the sketch after this list):

     split -b 5G large-model.gguf large-model.part-

  3. Use rsync for resumable uploads:

     rsync -avz -P -e "ssh -p 25636" large-model.gguf user@server:/mnt/storage/uploads/models/gguf/
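
For option 2, the parts reassemble on the server with cat (split names the parts so that shell glob order matches the original byte order):

cat large-model.part-* > large-model.gguf
sha256sum large-model.gguf   # compare against the original file's checksum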

Batch Operations

For uploading many small files, use the web UI's batch upload feature or rsync:

# Upload entire directory
rsync -avz -P -e "ssh -p 25636" local-models/ user@server:/mnt/storage/uploads/models/gguf/
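
Adding -n (--dry-run) first shows what would transfer without copying anything:

rsync -avzn -e "ssh -p 25636" local-models/ user@server:/mnt/storage/uploads/models/gguf/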

Additional Information

Service Dependencies

- Traefik must be running on the web network to serve https://upload.haiven.local
- The Docker networks web and backend must exist before the service starts (see Troubleshooting)

Monitoring

The service logs to JSON files with rotation:
- Max size: 50MB per file
- Max files: 3 rotated files
- Location: Managed by Docker (use docker logs)
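
To confirm the rotation settings on the running container:

docker inspect upload --format '{{json .HostConfig.LogConfig}}'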

Backup Recommendations

- Back up database/filebrowser.db (user accounts and FileBrowser state) and config/settings.json
- Keep a secure copy of the .env credentials outside version control
- The staging area is transient; promote or clean its contents rather than backing it up

Integration with Other Services

- llama-swap serves GGUF models promoted to /mnt/models/gguf/
- ComfyUI consumes image models promoted to /mnt/models/image/ (when available)
- Both services need a restart after new models are promoted (see Upload Workflow above)

Support

For issues or questions:
1. Check logs: docker logs upload
2. Review troubleshooting section above
3. Consult implementation documentation in this directory
4. Check Haiven server documentation: /mnt/apps/docker/CLAUDE.md