A lightweight monitoring application for Proxmox VE that displays real-time status for VMs and containers via a simple web interface.
Screenshots of the desktop and mobile views are available in the repository.
Choose your preferred installation method:
One-command installation in a new LXC container:
```shell
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/pulse.sh)"
```
This will create a new LXC container and install Pulse automatically. Visit the Community Scripts page for details.
For existing Docker hosts:
```shell
mkdir pulse-config && cd pulse-config
# Create docker-compose.yml (see Docker section)
docker compose up -d
# Configure via web interface at http://localhost:7655
```
For existing LXC containers:
```shell
curl -sLO https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-pulse.sh
chmod +x install-pulse.sh
sudo ./install-pulse.sh
```
- Quick Start
- Prerequisites
- Configuration
- Deployment Options
- Features
- System Requirements
- Updating Pulse
- Contributing
- Privacy
- License
- Trademark Notice
- Support
- Troubleshooting
Before installing Pulse, ensure you have:
For Proxmox VE:
- Proxmox VE 7.x or 8.x running
- Admin access to create API tokens
- Network connectivity between Pulse and Proxmox (ports 8006/8007)
For Pulse Installation:
- Community Scripts: Just a Proxmox host (handles everything automatically)
- Docker: Docker & Docker Compose installed
- Manual LXC: Existing Debian/Ubuntu LXC with internet access
✨ Easiest method - fully automated LXC creation and setup:
```shell
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/pulse.sh)"
```
This script will:
- Create a new LXC container automatically
- Install all dependencies (Node.js, npm, etc.)
- Download and set up Pulse
- Set up systemd service
After installation: Access Pulse at `http://<lxc-ip>:7655` and configure via the web interface.
Visit the Community Scripts page for more details.
For existing Docker hosts - uses pre-built image:
Prerequisites:
- Docker (Install Docker)
- Docker Compose (Install Docker Compose)
Steps:
- Create a Directory: Make a directory for your Docker configuration files:

  ```shell
  mkdir pulse-config && cd pulse-config
  ```

- Create `docker-compose.yml`: Create a file named `docker-compose.yml` in this directory with the following content:

  ```yaml
  # docker-compose.yml
  services:
    pulse-server:
      image: rcourtman/pulse:latest # Pulls the latest pre-built image
      container_name: pulse
      restart: unless-stopped
      ports:
        # Map host port 7655 to container port 7655
        # Change the left side (e.g., "8081:7655") if 7655 is busy on your host
        - "7655:7655"
      volumes:
        # Persistent volume for configuration data
        # Configuration persists across container updates
        - pulse_config:/usr/src/app/config

  # Define persistent volumes
  volumes:
    pulse_config:
      driver: local
  ```

- Run: Start the container:

  ```shell
  docker compose up -d
  ```

- Access and Configure: Open your browser to `http://<your-docker-host-ip>:7655` and configure through the web interface.
For existing Debian/Ubuntu LXC containers:
Prerequisites:
- A running Proxmox VE host
- An existing Debian or Ubuntu LXC container with network access to Proxmox
- Tip: Use Community Scripts to easily create one:

  ```shell
  bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/debian.sh)"
  ```
Steps:
- Access LXC Console: Log in to your LXC container (usually as `root`).
- Download and Run Script:

  ```shell
  # Ensure you are in a suitable directory, like /root or /tmp
  curl -sLO https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-pulse.sh
  chmod +x install-pulse.sh
  ./install-pulse.sh
  ```

- Follow Prompts: The script guides you through:
  - Installing dependencies (`git`, `curl`, `nodejs`, `npm`, `sudo`).
  - Setting up Pulse as a `systemd` service (`pulse-monitor.service`).
  - Optionally enabling automatic updates via cron.
- Access and Configure: The script will display the URL (e.g., `http://<LXC-IP-ADDRESS>:7655`). Open this URL and configure via the web interface.
For update instructions, see the Updating Pulse section.
Use this method if you have cloned the repository and want to build and run the application from the local source code.
- Get Files: Clone the repository:

  ```shell
  git clone https://github.com/rcourtman/Pulse.git && cd Pulse
  ```

- Run:

  ```shell
  docker compose up --build -d
  ```

  (The included `docker-compose.yml` uses the `build:` context by default.)
- Access and Configure: Open your browser to `http://localhost:7655` (or your host IP if Docker runs remotely) and configure via the web interface.
Pulse features a comprehensive web-based configuration system accessible through the settings menu. No manual file editing required!
First-time Setup:
- Access Pulse at `http://your-host:7655`
- The settings modal will automatically open for initial configuration
- Configure all your Proxmox VE and PBS servers through the intuitive web interface
- Test connections with built-in connectivity verification
- Save and reload configuration without restarting the application
Ongoing Management:
- Click the settings icon (⚙️) in the top-right corner anytime
- Add/modify multiple PVE and PBS endpoints
- Configure alert thresholds and service intervals
- All changes are applied immediately
For advanced users or development setups, Pulse can also be configured using environment variables in a `.env` file.
These are the minimum required variables:
- `PROXMOX_HOST`: URL of your Proxmox server (e.g., `https://192.168.1.10:8006`).
- `PROXMOX_TOKEN_ID`: Your API Token ID (e.g., `user@pam!tokenid`).
- `PROXMOX_TOKEN_SECRET`: Your API Token Secret.

Optional variables:

- `PROXMOX_NODE_NAME`: A display name for this endpoint in the UI (defaults to `PROXMOX_HOST`).
- `PROXMOX_ALLOW_SELF_SIGNED_CERTS`: Set to `true` if your Proxmox server uses self-signed SSL certificates. Defaults to `false`.
- `PORT`: Port for the Pulse server to listen on. Defaults to `7655`.
- `BACKUP_HISTORY_DAYS`: Number of days of backup history to display (defaults to `365` for a full-year calendar view).
- (A username/password fallback exists, but an API token is strongly recommended.)
Pulse includes a comprehensive alert system that monitors resource usage and system status:
```shell
# Alert System Configuration
ALERT_CPU_ENABLED=true
ALERT_MEMORY_ENABLED=true
ALERT_DISK_ENABLED=true
ALERT_DOWN_ENABLED=true

# Alert thresholds (percentages)
ALERT_CPU_THRESHOLD=85
ALERT_MEMORY_THRESHOLD=90
ALERT_DISK_THRESHOLD=95

# Alert durations (milliseconds - how long condition must persist)
ALERT_CPU_DURATION=300000    # 5 minutes
ALERT_MEMORY_DURATION=300000 # 5 minutes
ALERT_DISK_DURATION=600000   # 10 minutes
ALERT_DOWN_DURATION=60000    # 1 minute
```
Alert features include:
- Real-time notifications with toast messages
- Multi-severity alerts (Critical, Warning, Resolved)
- Duration-based triggering (alerts only fire after conditions persist)
- Automatic resolution when conditions normalize
- Alert history tracking
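Duration-based triggering can be pictured with a small sketch. This is illustrative only, not Pulse's actual implementation; the function and field names are invented for the example:

```javascript
// Sketch of duration-based alert triggering: an alert fires only after the
// metric has stayed at or above its threshold for the configured duration.
function createAlertTracker(threshold, durationMs) {
  let breachedSince = null; // timestamp when the threshold was first exceeded
  return {
    // Call on every poll; returns true when the alert should fire.
    sample(value, now) {
      if (value < threshold) {
        breachedSince = null; // condition normalized: automatic resolution
        return false;
      }
      if (breachedSince === null) breachedSince = now;
      return now - breachedSince >= durationMs;
    },
  };
}

// Example: CPU alert at 85% sustained for 5 minutes (300000 ms)
const cpuAlert = createAlertTracker(85, 300000);
cpuAlert.sample(90, 0);      // breach begins, not yet sustained -> false
cpuAlert.sample(92, 200000); // still breached for only 200 s -> false
cpuAlert.sample(95, 300000); // sustained for the full 5 minutes -> true
```

A brief dip below the threshold resets the timer, which is what keeps short spikes from generating noise.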
For advanced monitoring scenarios, Pulse supports custom alert thresholds on a per-VM/LXC basis through the web interface:
Use Cases:
- Storage/NAS VMs: Set higher memory thresholds (e.g., 95%/99%) for VMs that naturally use high memory for disk caching
- Application Servers: Set lower CPU thresholds (e.g., 70%/85%) for performance-critical applications
- Development VMs: Set custom disk thresholds (e.g., 75%/90%) for early storage warnings
Configuration:
- Navigate to Settings → Custom Thresholds tab
- Click "Add Custom Threshold"
- Select your VM/LXC from the dropdown
- Configure custom CPU, Memory, and/or Disk thresholds
- Save configuration
Features:
- Migration-aware: Thresholds follow VMs when they migrate between cluster nodes
- Per-metric control: Configure only the metrics you need (CPU, Memory, Disk)
- Visual indicators: VMs with custom thresholds show a blue "T" badge in the dashboard
- Fallback behavior: VMs without custom thresholds use global settings
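The fallback behavior amounts to a per-metric merge: guest-specific overrides win, and anything unset falls back to the global thresholds. A minimal sketch (the data shapes and names are assumptions for illustration, not Pulse's internal API):

```javascript
// Global thresholds applied to every guest by default
const globalThresholds = { cpu: 85, memory: 90, disk: 95 };

// Custom thresholds keyed by guest ID (hypothetical example data)
const customThresholds = {
  101: { memory: 99 },        // storage VM: high memory use is normal
  204: { cpu: 70, disk: 75 }, // app server: earlier warnings wanted
};

// Per-metric merge: custom values override globals, the rest fall through
function effectiveThresholds(guestId) {
  return { ...globalThresholds, ...(customThresholds[guestId] || {}) };
}

effectiveThresholds(101); // memory overridden to 99, cpu/disk stay global
effectiveThresholds(999); // no overrides: globals apply entirely
```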
Note: For a Proxmox cluster, you only need to provide connection details for one node. Pulse automatically discovers other cluster members.
To monitor separate Proxmox environments (e.g., different clusters, sites) in one Pulse instance, add numbered variables:
- `PROXMOX_HOST_2`, `PROXMOX_TOKEN_ID_2`, `PROXMOX_TOKEN_SECRET_2`
- `PROXMOX_HOST_3`, `PROXMOX_TOKEN_ID_3`, `PROXMOX_TOKEN_SECRET_3`
- ...and so on.

Optional numbered variables also exist (e.g., `PROXMOX_ALLOW_SELF_SIGNED_CERTS_2`, `PROXMOX_NODE_NAME_2`).
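One way to picture how numbered endpoint variables are gathered (a sketch under assumed naming conventions, not the actual Pulse source):

```javascript
// Collect PROXMOX_HOST, PROXMOX_HOST_2, PROXMOX_HOST_3, ... from an
// environment map into a list of endpoint configs. Stops at the first gap,
// so numbering should be contiguous.
function collectEndpoints(env) {
  const endpoints = [];
  for (let i = 1; ; i++) {
    const suffix = i === 1 ? '' : `_${i}`;
    const host = env[`PROXMOX_HOST${suffix}`];
    if (!host) break;
    endpoints.push({
      host,
      tokenId: env[`PROXMOX_TOKEN_ID${suffix}`],
      tokenSecret: env[`PROXMOX_TOKEN_SECRET${suffix}`],
      // Optional per-endpoint settings fall back to defaults
      name: env[`PROXMOX_NODE_NAME${suffix}`] || host,
      allowSelfSigned: env[`PROXMOX_ALLOW_SELF_SIGNED_CERTS${suffix}`] === 'true',
    });
  }
  return endpoints;
}
```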
To monitor PBS instances:
Primary PBS Instance:
- `PBS_HOST`: URL of your PBS server (e.g., `https://192.168.1.11:8007`).
- `PBS_TOKEN_ID`: Your PBS API Token ID (e.g., `user@pbs!tokenid`). See Creating a Proxmox Backup Server API Token.
- `PBS_TOKEN_SECRET`: Your PBS API Token Secret.
- `PBS_NODE_NAME`: Important! The internal hostname of your PBS server (e.g., `pbs-server-01`). This is usually required for API token auth because the token might lack permission to auto-discover the node name. See details below.
- `PBS_ALLOW_SELF_SIGNED_CERTS`: Set to `true` for self-signed certificates. Defaults to `false`.
- `PBS_PORT`: PBS API port. Defaults to `8007`.
Additional PBS Instances:
To monitor multiple PBS instances, add numbered variables, starting with `_2`:

- `PBS_HOST_2`, `PBS_TOKEN_ID_2`, `PBS_TOKEN_SECRET_2`
- `PBS_HOST_3`, `PBS_TOKEN_ID_3`, `PBS_TOKEN_SECRET_3`
- ...and so on.

Optional numbered variables also exist for additional PBS instances (e.g., `PBS_NODE_NAME_2`, `PBS_ALLOW_SELF_SIGNED_CERTS_2`, `PBS_PORT_2`). Each PBS instance, whether primary or additional, requires its respective `PBS_NODE_NAME` or `PBS_NODE_NAME_n` to be set if API token authentication is used and the token cannot automatically discover the node name.
Why `PBS_NODE_NAME` (or `PBS_NODE_NAME_n`) is Required

Pulse needs to query task lists specific to the PBS node (e.g., `/api2/json/nodes/{nodeName}/tasks`). It attempts to discover this node name automatically by querying `/api2/json/nodes`. However, this endpoint is often restricted for API tokens (returning a 403 Forbidden error), even for tokens with high privileges, unless the `Sys.Audit` permission is granted on the root path (`/`).

Therefore, setting `PBS_NODE_NAME` in your `.env` file is the standard and recommended way to ensure Pulse can correctly query task endpoints when using API token authentication. If it is not set and automatic discovery fails due to permissions, Pulse will be unable to fetch task data (backups, verifications, etc.).
How to find your PBS Node Name:
- SSH: Log into your PBS server via SSH and run `hostname`.
- UI: Log into the PBS web interface. The hostname is typically displayed on the Dashboard under Server Status.
Example: If your PBS connects via `https://minipc-pbs.lan:8007` but its internal hostname is `proxmox-backup-server`, set:

```shell
PBS_HOST=https://minipc-pbs.lan:8007
PBS_NODE_NAME=proxmox-backup-server
```
Using an API token is the recommended authentication method.
Steps to Create a PVE API Token
- Log in to the Proxmox VE web interface.
- Create a dedicated user (optional but recommended):
  - Go to `Datacenter` → `Permissions` → `Users`.
  - Click `Add`. Enter a `User name` (e.g., "pulse-monitor"), set Realm to `Proxmox VE authentication server` (`pam`), set a password, ensure `Enabled` is checked. Click `Add`.
- Create an API token:
  - Go to `Datacenter` → `Permissions` → `API Tokens`.
  - Click `Add`.
  - Select the `User` (e.g., "pulse-monitor@pam") or `root@pam`.
  - Enter a `Token ID` (e.g., "pulse").
  - Leave `Privilege Separation` checked. Click `Add`.
  - Important: Copy the `Secret` value immediately. It is shown only once.
- Assign permissions (to both the user and the token):
  - Go to `Datacenter` → `Permissions`.
  - Add User Permission: Click `Add` → `User Permission`. Path: `/`, User: `pulse-monitor@pam`, Role: `PVEAuditor`, check `Propagate`. Click `Add`.
  - Add Token Permission: Click `Add` → `API Token Permission`. Path: `/`, API Token: `pulse-monitor@pam!pulse`, Role: `PVEAuditor`, check `Propagate`. Click `Add`.
  - Note: The `PVEAuditor` role at the root path (`/`) with `Propagate` is crucial.
- Update `.env`: Set `PROXMOX_TOKEN_ID` (e.g., `pulse-monitor@pam!pulse`) and `PROXMOX_TOKEN_SECRET` (the secret you copied).
If monitoring PBS, create a token within the PBS interface.
Steps to Create a PBS API Token
- Log in to the Proxmox Backup Server web interface.
- Create a dedicated user (optional but recommended):
  - Go to `Configuration` → `Access Control` → `User Management`.
  - Click `Add`. Enter a `User ID` (e.g., "pulse-monitor@pbs"), set Realm (likely `pbs`), add a password. Click `Add`.
- Create an API token:
  - Go to `Configuration` → `Access Control` → `API Token`.
  - Click `Add`.
  - Select the `User` (e.g., "pulse-monitor@pbs") or `root@pam`.
  - Enter a `Token Name` (e.g., "pulse").
  - Leave `Privilege Separation` checked. Click `Add`.
  - Important: Copy the `Secret` value immediately. It is shown only once.
- Assign permissions (to both the user and the token):
  - Go to `Configuration` → `Access Control` → `Permissions`.
  - Add User Permission: Click `Add` → `User Permission`. Path: `/`, User: `pulse-monitor@pbs`, Role: `Audit`, check `Propagate`. Click `Add`.
  - Add API Token Permission: Click `Add` → `API Token Permission`. Path: `/`, API Token: `pulse-monitor@pbs!pulse`, Role: `Audit`, check `Propagate`. Click `Add`.
  - Note: The `Audit` role at the root path (`/`) with `Propagate` is crucial for both the user and the token.
- Update `.env`: Set `PBS_TOKEN_ID` (e.g., `pulse-monitor@pbs!pulse`) and `PBS_TOKEN_SECRET`.
- Proxmox VE:
  - Basic monitoring: The `PVEAuditor` role assigned at path `/` with `Propagate` enabled.
  - To view PVE backup files: Additionally requires the `PVEDatastoreAdmin` role on `/storage` (or specific storage paths).

Important: Storage Content Visibility

Due to Proxmox API limitations, viewing backup files in storage requires elevated permissions:

- `PVEAuditor` alone is NOT sufficient to list storage contents via the API.
- You must grant the `PVEDatastoreAdmin` role, which includes the `Datastore.Allocate` permission.
- This applies even for read-only access to backup listings.

To fix empty PVE backup listings:

```shell
# Grant storage admin permissions to your API token
pveum acl modify /storage --tokens 'user@realm!tokenname' --roles PVEDatastoreAdmin
```

Permissions included in `PVEAuditor`:

- `Datastore.Audit`
- `Permissions.Read` (implicitly included)
- `Pool.Audit`
- `Sys.Audit`
- `VM.Audit`

- Proxmox Backup Server: The `Audit` role assigned at path `/` with `Propagate` enabled is recommended.
For users who prefer not to use Docker or the LXC script, pre-packaged release tarballs are available.
Prerequisites:

- Node.js (version 18.x or later recommended)
- npm (comes with Node.js)
- The `tar` command (standard on Linux/macOS; available via tools like 7-Zip or WSL on Windows)

Steps:

- Download: Go to the Pulse GitHub Releases page and download the `pulse-vX.Y.Z.tar.gz` file for the desired release.
- Extract: Create a directory and extract the tarball:

  ```shell
  mkdir pulse-app && cd pulse-app
  tar -xzf /path/to/downloaded/pulse-vX.Y.Z.tar.gz
  # This creates a directory like pulse-vX.Y.Z/
  cd pulse-vX.Y.Z
  ```

- Run: Start the application using npm:

  ```shell
  npm start
  ```

  (Note: The tarball includes pre-installed production dependencies, so `npm install` is not typically required unless you encounter issues.)
- Access and Configure: Open your browser to `http://<your-server-ip>:7655` and configure via the web interface.
For development purposes or running directly from source, see the DEVELOPMENT.md guide. This involves cloning the repository, installing dependencies with `npm install` in both the root and `server` directories, and running `npm run dev` or `npm run start`.
- Lightweight monitoring for Proxmox VE nodes, VMs, and Containers.
- Real-time status updates via WebSockets.
- Simple, responsive web interface.
- Comprehensive backup monitoring:
- Proxmox Backup Server (PBS) snapshots and tasks
- PVE backup files stored on local and shared storage
- VM/CT snapshot tracking with calendar heatmap visualization
- Built-in diagnostic tool with API permission testing and troubleshooting guidance.
- Advanced alert system:
- Configurable global thresholds and durations
- Custom per-VM/LXC alert thresholds (perfect for storage VMs, application servers, etc.)
- Migration-aware thresholds that follow VMs across cluster nodes
- Efficient polling: Stops API polling when no clients are connected.
- Docker support.
- Multi-environment PVE monitoring support.
- LXC installation script.
- Node.js: Version 18.x or later (if building/running from source).
- npm: A version compatible with your Node.js release.
- Docker & Docker Compose: Latest stable versions (if using container deployment).
- Proxmox VE: Version 7.x or 8.x recommended.
- Proxmox Backup Server: Version 2.x or 3.x recommended (if monitored).
- Web Browser: Modern evergreen browser.
For non-Docker installations, Pulse includes a built-in update mechanism:
- Open the Settings modal (gear icon in the top right)
- Scroll to the "Software Updates" section
- Click "Check for Updates"
- If an update is available, review the release notes
- Click "Apply Update" to install it automatically
The update process:
- Backs up your configuration files
- Downloads and applies the update
- Preserves your settings
- Automatically restarts the application
If you installed using the Community Scripts method, simply re-run the original installation command:
```shell
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/pulse.sh)"
```
The script will detect the existing installation and update it automatically.
Docker deployments must be updated by pulling the new image:
```shell
cd /path/to/your/pulse-config
docker compose pull
docker compose up -d
```
This pulls the latest image and recreates the container with the new version.
Note: The web-based update feature will detect Docker deployments and provide these instructions instead of attempting an in-place update.
If you used the manual installation script, update by re-running it:
```shell
# Navigate to where you downloaded the script
cd /path/to/script/directory
./install-pulse.sh
```
Or run non-interactively (useful for automated updates):
```shell
./install-pulse.sh --update
```
Managing the Service:
- Check status: `sudo systemctl status pulse-monitor.service`
- View logs: `sudo journalctl -u pulse-monitor.service -f`
- Restart: `sudo systemctl restart pulse-monitor.service`
Automatic Updates:
If you enabled automatic updates during installation, they run via cron. Check the logs in `/var/log/pulse_update.log`.
To update a tarball installation:
- Download the latest release from GitHub Releases
- Stop the current application
- Extract the new tarball to a new directory
- Start the application: `npm start`
- Your configuration will be preserved automatically
If running from source code:
```shell
cd /path/to/pulse
git pull origin main
npm install
npm run build:css
npm run start # or your preferred restart method
```
Note: The development setup only requires npm install in the root directory, not in a separate server directory.
Contributions are welcome! Please read our Contributing Guidelines.
- No Data Collection: Pulse does not collect or transmit any telemetry or user data externally.
- Local Communication: Operates entirely between your environment and your Proxmox/PBS APIs.
- Credential Handling: Credentials are used only for API authentication and are not logged or sent elsewhere.
This project is licensed under the MIT License - see the LICENSE file.
Proxmox® and Proxmox VE® are registered trademarks of Proxmox Server Solutions GmbH. This project is not affiliated with or endorsed by Proxmox Server Solutions GmbH.
File issues on the GitHub repository.
If you find Pulse useful, consider supporting its development:
Can't access Pulse after installation?
```shell
# Check if service is running
sudo systemctl status pulse-monitor.service

# Check what's listening on port 7655
sudo netstat -tlnp | grep 7655

# View recent logs
sudo journalctl -u pulse-monitor.service -f
```
Empty dashboard or "No data" errors?
- Check API Token: Verify your `PROXMOX_TOKEN_ID` and `PROXMOX_TOKEN_SECRET` are correct.
- Test connectivity: Can you ping your Proxmox host from where Pulse is running?
- Check permissions: Ensure the token has the `PVEAuditor` role on path `/` with `Propagate` enabled.
"Empty Backups Tab" with PBS configured?
- Ensure `PBS Node Name` is configured in the settings modal.
- Find the hostname with: `ssh root@your-pbs-ip hostname`
Docker container won't start?
```shell
# Check container logs
docker logs pulse

# Restart container
docker compose down && docker compose up -d
```
Pulse includes a comprehensive built-in diagnostic tool to help troubleshoot configuration and connectivity issues:
Web Interface (Recommended):
- The diagnostics icon appears automatically in the header when issues are detected
- Click the icon or navigate to `http://your-pulse-host:7655/diagnostics.html`
- The tool will automatically run diagnostics and provide:
- API Token Permission Testing - Tests actual API permissions for VMs, containers, nodes, and datastores
- Configuration Validation - Verifies all connection settings and required parameters
- Real-time Connectivity Tests - Tests live connections to Proxmox VE and PBS instances
- Data Flow Analysis - Shows discovered nodes, VMs, containers, and backup data
- Specific Actionable Recommendations - Detailed guidance for fixing any issues found
Key Features:
- Tests use the same API endpoints as the main application for accuracy
- Provides exact permission requirements (e.g., `VM.Audit` on `/` for Proxmox)
- Shows counts of discovered resources (VMs, containers, nodes, backups)
- Identifies common misconfigurations like a missing `PBS_NODE_NAME`
- Privacy Protected: Automatically sanitizes hostnames, IPs, and sensitive data before export
- Export diagnostic reports safe for sharing in GitHub issues or support requests
Command Line:
```shell
# If using the source code:
./scripts/diagnostics.sh
# The script will generate a detailed report and save it to a timestamped file
```
- Empty Backups Tab:
  - PBS backups not showing: Usually caused by a missing `PBS Node Name` in the settings configuration. SSH to your PBS server and run `hostname` to find the correct value.
  - PVE backups not showing: Ensure your API token has the `PVEDatastoreAdmin` role on `/storage` to view backup files. See the permissions section above.
- Pulse Application Logs: Check container logs (`docker logs pulse`) or service logs (`sudo journalctl -u pulse-monitor.service -f`) for errors (401 Unauthorized, 403 Forbidden, connection refused, timeout).
- Configuration Issues: Use the settings modal to verify all connection details. Test connections with the built-in connectivity tester before saving. Ensure no placeholder values remain.
- Network Connectivity: Can the machine running Pulse reach the PVE/PBS hostnames/IPs and ports (usually 8006 for PVE, 8007 for PBS)? Check firewalls.
- API Token Permissions: Ensure the correct roles (`PVEAuditor` for PVE, `Audit` for PBS) are assigned at the root path (`/`) with `Propagate` enabled in the respective UIs.