
Proxmox VE Control


pvecontrol (https://pypi.org/project/pvecontrol/) is a CLI tool to manage Proxmox VE clusters and perform intermediate and advanced tasks that aren't available (or aren't straightforward) in the Proxmox web UI or default CLI tools.

It was written by (and for) teams managing multiple Proxmox clusters, sometimes with many hypervisors. Conversely, if your Proxmox install consists of a single cluster with a single node, the features of pvecontrol might not be very interesting for you!

Here are a few examples of things you can do with pvecontrol:

  • List all VMs across all hypervisors, along with their state and size;
  • Evacuate (i.e. drain) a hypervisor: migrate all VMs running on it, automatically picking nodes with enough capacity to host them;
  • Run sanity checks on a cluster, i.e. sets of tests designed to verify the integrity of the cluster (example commands below).
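
These map to invocations like the following (the node name argument to node evacuate is an assumption here; check pvecontrol node evacuate --help for the exact syntax):

pvecontrol -c mycluster vm list
pvecontrol -c mycluster node evacuate mynode
pvecontrol -c mycluster sanitycheck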

To communicate with Proxmox VE, pvecontrol uses proxmoxer, a wonderful library that enables communication with various Proxmox APIs.

Installation

pvecontrol requires Python version 3.9 or above.

The easiest way to install it is with pip. New versions are automatically published to the PyPI repository. It is recommended to use pipx, which automatically creates a dedicated Python virtual environment:

pipx install pvecontrol
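
Upgrading to a new release later is just as simple with pipx:

pipx upgrade pvecontrol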

Configuration

To use pvecontrol, you must create a YAML configuration file in $HOME/.config/pvecontrol/config.yaml. That file lists your clusters and how to authenticate with them.
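
If that directory doesn't exist yet, create it first:

mkdir -p $HOME/.config/pvecontrol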

pvecontrol only uses the Proxmox HTTP API, which means that you can use most Proxmox authentication mechanisms, including @pve realm users and tokens.

HTTPS certificate verification is disabled by default, but can be enabled using the ssl_verify boolean.

As an example, here's how to set up a dedicated user for pvecontrol, with read-only access to the Proxmox API:

pveum user add pvecontrol@pve --password my.password.is.weak
pveum acl modify / --roles PVEAuditor --users pvecontrol@pve
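
You can verify the result with the standard pveum listing commands:

pveum user list
pveum acl list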

You can then create the following configuration file in $HOME/.config/pvecontrol/config.yaml:

clusters:
  - name: fr-par-1
    host: localhost
    user: pvecontrol@pve
    password: my.password.is.weak
    ssl_verify: true

And see pvecontrol in action right away:

pvecontrol -c fr-par-1 vm list

If you plan to use pvecontrol to move VMs around, you should grant it PVEVMAdmin permissions:

pveum acl modify / --roles PVEVMAdmin --users pvecontrol@pve

API tokens

pvecontrol also supports authentication with API tokens. A Proxmox API token is associated with an individual user, and can be given separate permissions and an expiration date. You can learn more about Proxmox tokens in this section of the Proxmox documentation.

As an example, to create a new API token associated with the pvecontrol@pve user that inherits all its permissions, you can use the following command:

pveum user token add pvecontrol@pve mytoken --privsep 0

Then, retrieve the token value and add it to the configuration file to authenticate with it:

clusters:
  - name: fr-par-1
    host: localhost
    user: pvecontrol@pve
    token_name: mytoken
    token_value: randomtokenvalue
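
Note that Proxmox displays the token value only once, at creation time. You can still list a user's existing tokens (without their values) later:

pveum user token list pvecontrol@pve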

Reverse proxies

pvecontrol supports certificate-based authentication to a reverse proxy, which makes it suitable for use with tools like Teleport (via Teleport apps).

clusters:
  - name: fr-par-1
    host: localhost
    user: pvecontrol@pve
    password: my.password.is.weak
    proxy_certificate_path: /tmp/proxmox-reverse-proxy.pem
    proxy_certificate_key_path: /tmp/proxmox-reverse-proxy

You can also use the command substitution syntax with the proxy_certificate key to execute a command that outputs a JSON document containing the certificate and key paths.

clusters:
  - name: fr-par-1
    host: localhost
    user: pvecontrol@pve
    password: my.password.is.weak
    proxy_certificate: $(my_custom_command login proxmox-fr-par-1)

It should output something like this:

{
  "cert": "/tmp/proxmox-reverse-proxy.pem",
  "key": "/tmp/proxmox-reverse-proxy",
  "anything_else": "it is ok to have other fields, they will be ignored. this is to support existing commands"
}

CAUTION: environment variable and ~ expansion are not supported.
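
For reference, a minimal hypothetical helper satisfying this contract could look like this (the paths are placeholders; adapt them to whatever your proxy tooling produces):

#!/bin/sh
# Hypothetical proxy_certificate helper: print the certificate and key
# paths as a JSON document on stdout. The paths below are placeholders.
cat <<EOF
{"cert": "/tmp/proxmox-reverse-proxy.pem", "key": "/tmp/proxmox-reverse-proxy"}
EOF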

Better security

Instead of specifying user names, passwords, and certificate paths in plain text in the configuration file, you can use the shell command substitution syntax $(...) inside the user, password, and proxy_certificate fields; for instance:

clusters:
  - name: prod-cluster-1
    host: 10.10.10.10
    user: pvecontrol@pve
    ssl_verify: true
    password: $(command to get -password)
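
For example, if the secret lives in the pass password manager (the entry name proxmox/pvecontrol is hypothetical):

password: $(pass show proxmox/pvecontrol)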

Worse security

You can use @pam users (and even root@pam) and passwords in the pvecontrol YAML configuration file; but you probably should not, as anyone with read access to the configuration file would effectively gain shell access to your Proxmox hypervisor. Not recommended in production!

Advanced configuration options

The configuration file can include a node: section to specify CPU and memory policies. These are used when scheduling a VM (i.e. determining which node it should run on), in particular when draining a node for maintenance.

There are currently two parameters: cpufactor and memoryminimum.

cpufactor indicates the level of overcommit allowed on a hypervisor. 1 means no overcommit at all; 5 means "a hypervisor with 8 cores can run VMs with up to 5x8 = 40 cores in total".

memoryminimum is the amount of memory that should always remain available on a node, in bytes (for instance, 8589934592 bytes = 8 GiB). When scheduling a VM (for instance, when automatically moving VMs around), pvecontrol will make sure that this amount of memory remains available for the hypervisor OS itself. Caution: if that amount is set to zero, it will be possible to allocate the entire host memory to virtual machines, leaving no memory for the hypervisor operating system and management daemons!

These options can be specified in a global node: section, and then overridden per cluster.

Here is a configuration file showing this in action:

---
node:
  # Overcommit CPU factor
  # 1 = no overcommit
  cpufactor: 2.5
  # Memory to reserve for the system on a node (in bytes)
  memoryminimum: 8589934592
clusters:
  - name: my-test-cluster
    host: 192.168.1.10
    user: pvecontrol@pve
    password: superpasssecret
    # Override global values for this cluster
    node:
      cpufactor: 1
  - name: prod-cluster-1
    host: 10.10.10.10
    user: pvecontrol@pve
    password: Supers3cUre
  - name: prod-cluster-2
    host: 10.10.10.10
    user: $(command to get -user)
    password: $(command to get -password)
  - name: prod-cluster-3
    host: 10.10.10.10
    user: morticia@pve
    token_name: pvecontrol
    token_value: 12345678-abcd-abcd-abcd-1234567890ab

Usage

Here is a quick overview of pvecontrol commands and options; it may evolve over time:

$ pvecontrol --help
Usage: pvecontrol [OPTIONS] COMMAND [ARGS]...

  Proxmox VE control CLI, version: x.y.z

Options:
  -d, --debug
  -o, --output [text|json|csv|yaml|md]
                                  [default: text]
  -c, --cluster NAME              Proxmox cluster name as defined in
                                  configuration  [required]
  --unicode / --no-unicode        Use unicode characters for output
  --color / --no-color            Use colorized output
  --help                          Show this message and exit.

Commands:
  node evacuate  Evacuate a node by migrating all its VMs out to one or...
  node list      List nodes in the cluster
  sanitycheck    Check status of Proxmox cluster
  status         Show cluster status
  storage list   List storages in the cluster
  task get       Get detailed information about a task
  task list      List tasks in the cluster
  vm list        List VMs in the cluster
  vm migrate     Migrate VMs in the cluster

  Made with love by Enix.io

pvecontrol uses subcommands for each operation. Operations related to a specific kind of object (tasks, for instance) are grouped into their own subcommand group. Each subcommand has its own help:

$ pvecontrol task get --help
Usage: pvecontrol task get [OPTIONS] UPID

Options:
  -f, --follow  Follow task log output
  -w, --wait    Wait for task to end
  --help        Show this message and exit.

The -c or --cluster flag is required to indicate which cluster to work on.

The simplest way to check that pvecontrol works correctly, and that authentication is configured properly, is the status command:

$ pvecontrol --cluster my-test-cluster status
INFO:root:Proxmox cluster: my-test-cluster

  Status: healthy
  VMs: 0
  Templates: 0
  Metrics:
    CPU: 0.00/64(0.0%), allocated: 0
    Memory: 0.00 GiB/128.00 GiB(0.0%), allocated: 0.00 GiB
    Disk: 0.00 GiB/2.66 TiB(0.0%)
  Nodes:
    Offline: 0
    Online: 3
    Unknown: 0

If this works, we're good to go!

Environment variables

pvecontrol supports the following environment variables:

  • PVECONTROL_CLUSTER: the default cluster to use when no -c or --cluster option is specified.
  • PVECONTROL_COLOR: if set to False, it will disable all colorized output.
  • PVECONTROL_UNICODE: if set to False, it will disable all unicode output.
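
For example, to set a default cluster for your current shell session:

export PVECONTROL_CLUSTER=fr-par-1
pvecontrol vm list    # no -c/--cluster flag needed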

Shell completion

pvecontrol provides a completion helper to generate completion configuration for common shells. It currently supports bash, zsh, and fish.

You can adapt the following commands to your environment:

# bash
_PVECONTROL_COMPLETE=bash_source pvecontrol > "${BASH_COMPLETION_USER_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/bash-completion}/completions/pvecontrol"
# zsh
_PVECONTROL_COMPLETE=zsh_source pvecontrol > "${HOME}/.zsh/completions/_pvecontrol"
# fish
_PVECONTROL_COMPLETE=fish_source pvecontrol > {$HOME}/.config/fish/completions/pvecontrol.fish

Development

If you want to tinker with the code, all the required dependencies are listed in requirements.txt, and you can install them e.g. with pip:

pip3 install -r requirements.txt -e .
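
If you prefer to keep these dependencies isolated, you can create a virtual environment first (the .venv directory name is arbitrary):

python3 -m venv .venv
. .venv/bin/activate
pip3 install -r requirements.txt -e .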

Then you can run the script directly like so:

pvecontrol --help

Contributing

This project uses semantic versioning with the python-semantic-release toolkit to automate the release process. All commits must follow the Angular Commit Message Conventions. The repository's main branch is protected to prevent accidental releases. All updates must go through a PR with a review.


Made with ❤️ by Enix (http://enix.io) 🐒 from Paris 🇫🇷.
