Cronicle User Guide

This guide covers day-to-day usage of Cronicle for managing scheduled tasks on the Haiven server.

Table of Contents

  1. Accessing the Web UI
  2. Understanding the Dashboard
  3. Managing Jobs
  4. Job Scripts
  5. Schedules
  6. Notifications
  7. Viewing History
  8. Best Practices

Accessing the Web UI

URL

Open your browser and navigate to:

https://scheduler.haiven.local

First-Time Login

  1. Default username: admin
  2. Default password: admin
  3. Immediately change your password:
    - Click your username (top right)
    - Select My Account
    - Change password
    - Save

The main navigation bar includes:

- Home: Dashboard with active jobs and server status
- Schedule: Calendar view of upcoming jobs
- Jobs: Job list and management
- Job History: Execution history and logs
- Admin: Server configuration (admin only)

Understanding the Dashboard

Server Status

The dashboard shows:
- Active Jobs - Currently running jobs
- Queued Jobs - Jobs waiting to run
- Server Health - CPU, memory, and disk status
- Recent Activity - Latest job executions

Alerts

Warning indicators appear for:
- Failed jobs
- Jobs taking longer than expected
- Server resource issues


Managing Jobs

Creating a New Job

  1. Navigate to Jobs → New Job
  2. Fill in the basic settings:
    - Title: Descriptive name for the job
    - Category: Group (Maintenance, Monitoring, AI Services, Backups)
    - Plugin: Execution type (Shell Script recommended)
    - Target: Server group to run on (usually "All Servers")
  3. Configure the Schedule (see Schedules)
  4. Add your Script content
  5. Click Save
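The same event can also be created programmatically through Cronicle's JSON API. The sketch below composes a create_event payload and validates it locally before sending; the category ID (maintenance), plugin ID (shellplug), target group (allgrp), and API key are placeholder assumptions for this installation, so verify them against your Admin tabs first.

```shell
#!/bin/sh
# Sketch: create an event via Cronicle's JSON API instead of the UI.
# The category, plugin, and target IDs below are assumed defaults --
# check them in your own install before sending.
cat > /tmp/new-event.json <<'EOF'
{
  "title": "Daily Log Analysis",
  "category": "maintenance",
  "plugin": "shellplug",
  "target": "allgrp",
  "timing": { "hours": [2], "minutes": [0] },
  "params": { "script": "#!/bin/bash\nset -e\necho hello" },
  "enabled": 1
}
EOF

# Validate the JSON locally before sending anything
python3 -m json.tool /tmp/new-event.json > /dev/null && echo "payload OK"

# Send it (uncomment and supply a real API key from the Admin tab):
# curl -s -X POST "https://scheduler.haiven.local/api/app/create_event/v1" \
#     -H "X-API-Key: YOUR_API_KEY" \
#     -H "Content-Type: application/json" \
#     --data @/tmp/new-event.json
```

The timing object here mirrors the "Daily at 2 AM" schedule described later in this guide.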

Editing a Job

  1. Go to Jobs list
  2. Click the job name
  3. Modify settings
  4. Click Save

Running a Job Manually

  1. Find the job in the Jobs list
  2. Click Run Now (play button)
  3. View progress in Job History

Enabling/Disabling Jobs

  1. Go to Jobs list
  2. Click the job name
  3. Toggle the Enabled checkbox in the job settings
  4. Click Save

Deleting a Job

  1. Go to Jobs list
  2. Click the job name
  3. Scroll to bottom
  4. Click Delete Job (requires confirmation)

Job Scripts

Shell Script Basics

Jobs use standard bash scripting:

#!/bin/bash
set -e  # Exit on error

echo "=== Job Started - $(date) ==="

# Your commands here
docker ps

echo "=== Job Complete ==="

Available Paths

Scripts have access to these mounted paths:

- /mnt/apps/docker (host: /mnt/apps/docker, read-only)
- /host/scripts (host: /usr/local/bin, read-only)
- /host/var/log (host: /var/log, read-only)
- /mnt/models (host: /mnt/models, read-only)
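Because a missing mount makes a job fail in confusing ways partway through, a quick existence check at the top of a script makes the failure explicit. A minimal sketch using the paths above:

```shell
#!/bin/bash
# Fail loudly (rather than mid-script) if an expected mount is absent.
# Paths are the container-side mounts listed above.
MISSING=0
for p in /mnt/apps/docker /host/scripts /host/var/log /mnt/models; do
    if [ -d "$p" ]; then
        echo "ok: $p"
    else
        echo "missing: $p"
        MISSING=1
    fi
done

# Uncomment to abort the job when any mount is absent:
# [ "$MISSING" -eq 0 ] || exit 1
```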

Docker Commands

The Docker socket is mounted, so you can run:

# List containers
docker ps

# Restart a service
docker restart llama-swap

# View logs
docker logs --tail 50 echo

# Check container health
docker inspect cronicle --format='{{.State.Health.Status}}'

Exit Codes

Use exit codes to signal job status:

#!/bin/bash
set -e

# grep exits non-zero when nothing matches; || true keeps set -e from aborting
ERROR_COUNT=$(grep -c "ERROR" /host/var/log/syslog || true)
ERROR_COUNT=${ERROR_COUNT:-0}  # default to 0 if the log file is missing

if [ "$ERROR_COUNT" -gt 100 ]; then
    echo "HIGH ERROR COUNT: $ERROR_COUNT"
    exit 1  # Triggers failure notification
fi

echo "Error count OK: $ERROR_COUNT"
exit 0

Schedules

Schedule Types

- Once: Run at a specific date/time
- Daily: Run every day at a set time
- Hourly: Run every hour at :00, :15, :30, or :45
- Weekly: Run on specific days of the week
- Monthly: Run on specific days of the month
- On Demand: Manual execution only (no schedule)

Setting a Schedule

  1. In job settings, find the Timing section
  2. Select schedule type
  3. Configure specific times
  4. Save

Examples

Daily at 2 AM:
- Type: Daily
- Time: 02:00

Every hour:
- Type: Hourly
- Minute: 00

Weekly on Sunday:
- Type: Weekly
- Days: Sunday
- Time: 03:00

Every 15 minutes:
- Type: Hourly
- Minutes: 00, 15, 30, 45


Notifications

Configuring Webhooks

  1. Go to Admin → Notification
  2. Click Add Notification
  3. Select Web Hook type
  4. Enter URL (e.g., Uptime Kuma push URL)
  5. Choose trigger: Success, Failure, or Both
  6. Save

Job-Level Notifications

  1. Edit a job
  2. Find Notifications section
  3. Enable notifications for:
    - Job Start
    - Job Success
    - Job Failure

Uptime Kuma Integration

For critical jobs, create push monitors:

  1. In Uptime Kuma, create a Push monitor
  2. Set heartbeat interval to match job frequency
  3. Copy the push URL
  4. In Cronicle, configure job to call this URL on success

Viewing History

Job History Tab

Shows all job executions with:
- Start/end times
- Duration
- Exit code
- Quick access to logs

Filtering History

Use filters to find specific runs:
- By job name
- By date range
- By status (success/failure)

Viewing Job Output

  1. Click on a history entry
  2. View full output log
  3. Download log if needed

Retention Settings

Configure how long to keep history:

  1. Go to Admin → Maintenance
  2. Set Job Data Expiration (default: 90 days)
  3. Set Job Log Expiration (default: 30 days)

Best Practices

Job Organization

  1. Use Categories: Group related jobs
    - Maintenance
    - Monitoring
    - AI Services
    - Backups

  2. Clear Naming: Include purpose and frequency
    - Good: Daily Log Analysis
    - Bad: script1

Script Guidelines

  1. Start with set -e: Exit on first error
  2. Add timestamps: Help with debugging
  3. Echo progress: Show what's happening
  4. Use exit codes: Signal success/failure
  5. Clean up: Remove temp files

Example template:

#!/bin/bash
set -e

echo "=== $(date) - Starting $0 ==="

# Main work
do_something

# Cleanup
rm -f /tmp/work_file

echo "=== $(date) - Complete ==="

Monitoring

  1. Enable notifications for critical jobs
  2. Check history regularly for silent failures
  3. Monitor job duration - sudden changes indicate issues
  4. Use Uptime Kuma push monitors for critical tasks

Security

  1. Never hardcode secrets in job scripts
  2. Use environment variables from .env file
  3. Verify webhook URLs before adding
  4. Review job logs for sensitive output
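Point 2 above can be sketched like this; /mnt/apps/docker/.env and API_TOKEN are assumed names, so substitute your actual env file and variable:

```shell
#!/bin/bash
set -e
# Load secrets from an env file instead of hardcoding them in the script.
ENV_FILE="${ENV_FILE:-/mnt/apps/docker/.env}"   # assumed location

if [ -f "$ENV_FILE" ]; then
    set -a          # auto-export every variable sourced below
    . "$ENV_FILE"
    set +a
else
    echo "env file not found: $ENV_FILE"
fi

# Use the secret without printing its value into the job log
: "${API_TOKEN:=unset}"
echo "API_TOKEN length: ${#API_TOKEN}"
```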

Troubleshooting

Job Stuck in Running

  1. Check if the script has an infinite loop
  2. Look at current output in Job History
  3. Manually abort: Click Abort in job details
  4. Review script for timeout issues
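A common cause of stuck jobs is a single hung command; wrapping it in coreutils timeout turns a hang into an ordinary failure. A sketch (sleep 2 stands in for the real command, and 300 seconds is an arbitrary limit):

```shell
#!/bin/bash
set -e
# Kill the wrapped command if it runs longer than 300 seconds.
if timeout 300 sleep 2; then
    echo "command finished within the limit"
    RESULT=ok
else
    echo "command timed out"
    exit 1  # surfaces as a normal job failure, not a stuck run
fi
```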

Job Fails Immediately

  1. Check script syntax: bash -n script.sh
  2. Verify all paths exist
  3. Check permissions on files/directories
  4. Review error output in Job History

Missing Output

  1. Ensure script uses echo for output
  2. Check for buffered output (add stdbuf -oL prefix)
  3. Verify script doesn't redirect stdout/stderr

Can't Access Web UI

  1. Verify container is running: docker ps | grep cronicle
  2. Check Traefik logs: docker logs traefik | grep cronicle
  3. Test direct access: curl http://localhost:3012/
  4. Verify DNS: ping scheduler.haiven.local

Pre-Configured Jobs Reference

Journal Cleanup (Daily 2 AM)

Cleans systemd journal logs to prevent disk fill:

#!/bin/bash
set -e
journalctl --vacuum-time=30d
journalctl --vacuum-size=2G

Log Analysis (Daily 7 AM)

Reports system errors and health metrics:

#!/bin/bash
set -e
# Counts errors, failed logins
# Checks container health
# Reports disk usage

Disk Space Monitor (Hourly)

Alerts on low disk space:

#!/bin/bash
set -e
FREE_GB=$(df -k / | awk 'NR==2 {print int($4/1024/1024)}')
if [ "$FREE_GB" -lt 10 ]; then
    echo "LOW DISK SPACE: ${FREE_GB}GB free"
    exit 1  # Triggers notification
fi

Log Ship to NAS (Weekly Sunday 3 AM)

Archives logs to network storage:

#!/bin/bash
set -e
tar -czf /mnt/nas1/backups/logs/haiven/logs-$(date +%Y-%m-%d).tar.gz /host/var/log
find /mnt/nas1/backups/logs/haiven -type f -mtime +90 -delete

Model Discovery (Daily 6 AM)

Scans for new GGUF models:

#!/bin/bash
set -e
cd /mnt/apps/docker/ai/llama-swap
python3 scripts/discover-models.py

Getting Help