Proxmox
Superb tutorial, but I've put some additional notes below. I diverge from some of the defaults they demo, as a few don't work or there are better and faster alternatives (notes below).
INTRO: VM vs Containers
TLDR: use VMs, ideally go with template setup
- LXC containers do not have Cloud-Init;
- You can migrate VMs between cluster nodes without shutting them down;
- Containers are more suitable if you have limited resources and are very efficient on RAM;
- Not all apps will run properly in containers;
- Some vendors have stigma against containers;
- Containers are a pain to set up with ZeroTier if they are unprivileged, and SSH will be a pain after switching to privileged
- Proxmox CLI `qm` commands work only on VMs
Linux VM
Highly recommend running Ubuntu Server: it has a convenient way to enable passwordless SSH from the installer and pull your public keys from GitHub, you don't have to install essential packages like `sudo`, `curl`, and `git` as on Debian, and it handles missing packages more gracefully.
I originally tried Debian; Parrot OS and Arch failed to install.
Ubuntu Server also failed at first because I had the BIOS boot mode enabled (which is the default; make sure to change it to UEFI when creating the VM).
CloudInit VM templates from scratch (NEW)
https://www.youtube.com/watch?v=MJgIm03Jxdo
- create the VM without an ISO image (`Do not use any media` in the OS tab). You don't need to change the BIOS for Ubuntu as before; leave it as SeaBIOS
- System: tick QEMU Agent
- Disk: delete it
- OPT: Disk: enable 'Discard' / SSD emulation. DO NOT change the Bus from SCSI to VirtIO Block here (even though VirtIO Block is fastest)
- Network: Intel E1000
- Cloud-Init config
- Hardware tab: Add -> CloudInit Drive -> local-lvm
- Cloud-Init tab: configure, including DHCP / IP config. Do not forget to click `Regenerate Image`
- via Proxmox shell
```sh
# IMPORTANT: download the CLOUD VERSION of the img, or qcow2
# ubuntu
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
# or debian
wget https://cloud.debian.org/images/cloud/bookworm/20240507-1740/debian-12-generic-amd64-20240507-1740.qcow2
# or fedora: https://fedoraproject.org/cloud/download/
# set machine (900 is an example VM ID in Proxmox)
qm set 900 --serial0 socket --vga serial0
# rename IMG
mv noble-server-cloudimg-amd64.img ubuntu-24.04.qcow2
# if it is raw, the below might work
qemu-img convert -f raw -O qcow2 lala.raw output.qcow2
# resize
qemu-img resize ubuntu-24.04.qcow2 32G
# import. check your VM ID and whether your storage is local-lvm
qm importdisk 900 ubuntu-24.04.qcow2 local-lvm
```
- in Hardware tab of the VM:
  - find the Unused Disk -> edit:
    - enable 'Discard' for SSD
    - check Enable SSD emulation
    - OPT: Bus from SCSI to VirtIO Block (fastest)
- Options tab:
  - Boot Order - enable the new disk and move it to 2nd position after the CD drive
  - disable tablet pointer
- Right Click on VM -> convert to template
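OPT: if you prefer the shell over the GUI, the same flow can be done entirely with `qm` commands. A rough sketch below; VM ID 900, the image name and `local-lvm` storage are just the example values used above, and the imported disk name may differ (check the `qm importdisk` output):

```sh
# create an empty VM (no ISO), QEMU agent enabled, VirtIO SCSI controller
qm create 900 --name ubuntu-cloud-template --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --agent enabled=1 --scsihw virtio-scsi-pci
# import the cloud image and attach it as the boot disk (disk name is usually vm-900-disk-0)
qm importdisk 900 ubuntu-24.04.qcow2 local-lvm
qm set 900 --scsi0 local-lvm:vm-900-disk-0,discard=on,ssd=1
# add the Cloud-Init drive, serial console and boot order
qm set 900 --ide2 local-lvm:cloudinit
qm set 900 --serial0 socket --vga serial0
qm set 900 --boot order=scsi0
# turn the VM into a template
qm template 900
```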
Post clone / start
- install QEMU agent (MUST DO)
```sh
sudo apt update
sudo apt install qemu-guest-agent
sudo systemctl enable qemu-guest-agent
sudo systemctl start qemu-guest-agent
```
## VM Install w/o Template
* [ ] System: BIOS: UEFI for Ubuntu (Ubuntu Server won't work with BIOS; Debian will though)
* [ ] System: check QEMU Agent
* [ ] Disk: change Bus from SCSI to VirtIO Block (fastest) and enable 'Discard'
* [ ] CPU: type -> `host`. 1 socket, 4 cores usually
* [ ] Network: Model -> `Intel E1000` if it works for you, otherwise default
## Post Install - VMs
- [ ] wait for CloudInit to configure SSH keys - you will see it in terminal
- [ ] Unmount the boot CD (if you installed from CD and did not use a template) --> VM Hardware
- [ ] basic system update
```sh
cat /proc/cpuinfo
sudo apt update && sudo apt dist-upgrade
```
- Run the post-install script to clean up the pop-up notification
- Install QEMU Agent (IMPORTANT for CloudInit? not sure) - see instructions below
- if Ubuntu, consider Ubuntu Pro by running `pro attach`
- disable ‘use tablet for pointer’ if you don’t use OS GUI –> VM Options
- Separate Proxmox management network from VM networks
- Enable start VM at boot if required –> Options
- Provision a CD / iso to transfer files between machines
SSH Config
- ensure password login is disabled in `/etc/ssh/sshd_config`
```sh
vi /etc/ssh/sshd_config
systemctl restart ssh
```
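The relevant directives look something like this (standard OpenSSH options; make sure your key-based login works before restarting sshd):

```sh
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
```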
- reboot schedule (system crontab entry, note the `root` user field, e.g. in `/etc/crontab`)
0 4 * * * root /usr/sbin/reboot
check status
sudo systemctl status ssh
grant sudo access to users if you want
sudo usermod -aG sudo friendlyantz
Enable QEMU Guest Agent
This will help with sending proper power off / exit commands from Proxmox to the VM:
- Install QEMU Guest Agent
sudo apt install qemu-guest-agent
systemctl status qemu-guest-agent.service # check if running
# -> enable in VM Options
systemctl start qemu-guest-agent.service # if not running
- and enable it in Proxmox VM Options(restart required)
Docker setup (instructions from Ruby on Rails Kamal)
sudo apt update
sudo apt upgrade -y
sudo apt install -y docker.io curl git
sudo usermod -a -G docker friendlyantz
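To verify the install (log out and back in first so the `docker` group change applies), the usual smoke test is:

```sh
docker --version
docker run --rm hello-world
```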
Install ZeroTier (unless you plan to convert this VM to a template)
# less secure
curl -s https://install.zerotier.com | sudo bash
# more secure
curl -s 'https://raw.githubusercontent.com/zerotier/ZeroTierOne/master/doc/contact%40www.zerotier.com.gpg' | gpg --import && \
if z=$(curl -s 'https://install.zerotier.com/' | gpg); then echo "$z" | sudo bash; fi
# join your network
sudo zerotier-cli join 12345
# ensure you are connected
zerotier-cli listnetworks
Set static IP for local network instead of DHCP (VMs)
sudo vi /etc/netplan/00-installer-config.yaml
```yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      dhcp4: no
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
  version: 2
```
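Netplan won't pick the change up until you apply it (standard netplan step, not shown above):

```sh
sudo netplan try    # optional: test the config with automatic rollback
sudo netplan apply
```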
🌱new VM from template
- select clone mode: Full vs Linked Clone (a linked clone dies if the template gets removed)
- Ubuntu: UEFI is not required with the NEW template method
- Disk -> local-lvm? or other storage
- check SSH status
sudo systemctl status ssh
# if CloudInit stuffed up hostkey generation:
sudo ssh-keygen -A # all keys
# or
sudo ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
# restart
sudo systemctl start ssh
- remove password auth - cloud-init allows password login by default, so disable it (check both the drop-in dir and the main config)
sudo vi /etc/ssh/sshd_config.d/
sudo vi /etc/ssh/sshd_config
- OPT: update the hostname
vi /etc/hosts
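A quick way to do the rename (hostnamectl is available on Ubuntu Server; the new name below is just an example):

```sh
sudo hostnamectl set-hostname my-new-vm   # example name
sudo vi /etc/hosts                        # update the old name here too
```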
- remove CloudInit Drive
Convert VM to Templates (OLD / Superseded - DO NOT USE IT, historical notes)
refer to the new VM template from scratch section above
- Sanitize SSH
ls -l /etc/ssh
- Purge SSH keys from /etc/ssh
sudo rm /etc/ssh/ssh_host_* # remove host keys
- ensure the `cloud-init` package is already present - this is SUPPOSED to regenerate SSH host keys on install, but it doesn't work for me; see below
apt search cloud-init # check if it already installed
# or
sudo apt install cloud-init # install if not
or do it manually:
- under the `/etc/systemd/system/` dir, create a file as per this script, and change its ownership to root
sudo chown root:root regenerate_ssh_host_keys.service
- reload systemd unit files
sudo systemctl daemon-reload
- check if it's enabled, AND enable it
systemctl status regenerate_ssh_host_keys.service
sudo systemctl enable regenerate_ssh_host_keys.service
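The referenced script isn't reproduced here, but a minimal one-shot unit along these lines should do the job - this is my own sketch, not the exact file from the guide:

```
# /etc/systemd/system/regenerate_ssh_host_keys.service (hypothetical sketch)
[Unit]
Description=Regenerate SSH host keys on first boot
ConditionPathExists=!/etc/ssh/ssh_host_ed25519_key
Before=ssh.service

[Service]
Type=oneshot
ExecStart=/usr/bin/ssh-keygen -A

[Install]
WantedBy=multi-user.target
```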
- Reset Machine ID
# (on Ubuntu) check if it's empty. do not remove
cat /etc/machine-id
# empty it if it's not
sudo truncate -s 0 /etc/machine-id
Check symbolic link is pointing to /etc/machine-id
ls -l /var/lib/dbus/machine-id
# otherwise, link it
sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
- Clean the `apt` database and package cache
sudo apt clean
sudo apt autoremove
- Convert VM to template (via Proxmox UI)
- Now power-off the VM and convert it to template
- Replace CD with Cloud-init Drive
- Add Cloud-init Drive –> Hardware tab (keep IDE)
- setup Cloud-init
- Set up default user in cloud init UI tab
- add SSH authorized public keys to cloud-init
- Click `Regenerate Image`
- Setup Firewall rules and enable it via Proxmox UI
https://www.youtube.com/watch?v=DNsLLrCgK0U
Containers 🫙
LXC containers save their state. Install from an Ubuntu (or similar) server template.
- Network: DHCP, or if you want a static IP:
  - IPv4/CIDR: `192.168.1.xx/24`
  - Gateway: `192.168.1.1`
Post install - containers
- set a static IP (IPv4/CIDR) -> via UI (Network section)
- start automatically
- check ssh
sudo systemctl status ssh
you might need new host keys if cloned from a template
sudo ssh-keygen -A
sudo systemctl restart ssh
- unprivileged container - safer (container root is not mapped to the host root account), but might cause issues
ip a # check ip
- add user
```sh
adduser friendlyantz
# give them sudo (root) access
usermod -aG sudo friendlyantz
```
refer to the VM post-install section above (make sure to add your public SSH key before disabling password login)
- !!! **disabling password login might cause issues for the template** - you might need to restart the SSH server via the Proxmox UI console
## ZeroTier on LXC
[forum link to resolve ZT not connecting](https://forum.level1techs.com/t/zerotier-in-lxc-proxmox/155515/11)
- TLDR:
> Essentially you need to give the LXC container the permissions to be able to create TUN devices. My LXC containers are on Proxmox, so my instructions are based on that.
go to the config and change the `unprivileged` setting - this will break SSH
```sh
vi /etc/pve/lxc/XXX.conf # replace XXX with your container ID
# change unprivileged to 0 (read above about the dangers)
unprivileged: 0
```
for older Proxmox:
```
lxc.cgroup.devices.allow = c 10:200 rwm
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"
```
or for Proxmox 7:
```
lxc.cgroup2.devices.allow = c 10:200 rwm
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"
```
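After restarting the container, a quick sanity check from inside it (plain shell, nothing Proxmox-specific):

```sh
ls -l /dev/net/tun          # should exist as a character device (10, 200)
sudo zerotier-cli info      # should report ONLINE once connected
```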
Container Templates (Conversion)
sudo apt update && sudo apt dist-upgrade # final update before clean
# might install these packages
apt update
sudo apt install curl net-tools fzf
sudo apt clean # clean app cache
sudo apt autoremove # remove unused packages
# purge SSH host keys (ONLY IF YOU KEEP PASSWORD LOGIN FOR TEMPLATE, otherwise you might need to restart SSHD server via Proxmox CLI)
cd /etc/ssh/ # remove host keys
sudo rm ssh_host_* # remove host keys (do not leave ssh session after this)
# reset machine ID
cat /etc/machine-id # (on Ubuntu) check if it's empty. do not remove
sudo truncate -s 0 /etc/machine-id # empty it if it's not
sudo poweroff # power off the VM
- Convert Container to template (via Proxmox UI)
Launch Cloned Container
- Full Clone
- pick a purpose-built hostname: the name of the container in the UI will become the machine name
- after login
sudo apt update && sudo apt dist-upgrade
- there is no Cloud-Init for containers, so we need to set up SSH host keys manually
cd /etc/ssh/ # remove host keys if haven't done previously
sudo rm ssh_host_* # remove host keys (do not leave ssh session after this)
sudo dpkg-reconfigure openssh-server # reconfigure ssh server host keys
Proxmox Admin / User Management
In DataCenter tab:
- Create Proxmox VE (PVE) realm users, not Linux PAM users (unless you need SSH and shell access to the Proxmox server)
- Create group
- Add permissions to the group
- Add group to the user
Backups and Snapshots
Snapshots
Are good for testing software, but not for backups. They are stored in the same storage as the VM, so if the storage dies, so does the snapshot.
Backups
Use datacenter to manage and schedule Backups across all VMs, not just individual VM.
Consider between different modes:
- snapshot(not 100% reliable, but still pretty reliable and no downtime)
- stop(reliable, but has downtime)
- suspend(reliable, but has downtime)
Proxmox CLI
For VM only (not CONTAINERS!)
qm list # list all VMs
qm start 100 # start VM with ID 100
qm shutdown 100 # graceful shutdown VM with ID 100
qm reboot 100 # graceful reboot VM with ID 100
qm reset 100 # hard reset VM with ID 100
qm stop 100 # hard stop VM with ID 100
qm set 106 --onboot 0 # disable autostart for VM with ID 106
qm config 106 # show config for VM with ID 106
# RTFM
man qm
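For containers the equivalent tool is `pct` - a quick sketch of the matching commands (see `man pct`):

```sh
pct list            # list all containers
pct start 100       # start container with ID 100
pct shutdown 100    # graceful shutdown
pct stop 100        # hard stop
pct enter 100       # get a shell inside the container
pct config 100      # show config
```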
ZeroTier Exit node ⚡️ (Access physical LAN clients through ZT exit node)
https://www.youtube.com/watch?app=desktop&v=UjWKvBwV0Qs
sudo vi /etc/sysctl.conf
# enable packet forwarding
net.ipv4.ip_forward=1
reload
sudo sysctl -p
find out interface name
ip r get 8.8.8.8 fibmatch
add the NAT chain to your nftables config - it has to live inside a `table ip nat` block (replace `eth0` with your interface name, the one shown after `dev` in the previous command):
```
table ip nat {
	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		oifname eth0 masquerade
	}
}
```
enable the nftables service
systemctl enable --now nftables.service
check
zerotier-cli listnetworks
ip -4 -br a
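and confirm the NAT rule actually loaded:

```sh
sudo nft list ruleset   # should show the postrouting chain with 'masquerade'
```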
in ZeroTier WebUI: Advanced -> Managed Routes -> Add
- Destination: `0.0.0.0/0`
- Via: IP of the ZT exit-node machine
In the ZT UI on the client machine: Allow Default Route Override
Networking / Firewall (WIP)
Options and Considerations:
- Internet Router --> Managed Switch --> Server with pfSense AND Access Points for
- separate VLANs / PFSense tags –> Devices
- Secure Net
- Home Automation
- etc
- Internet Router w/o switch. LAN to Proxmox with pfSense
  - --> USB WiFi dongle to use as an Access Point for IoT devices (a USB dongle is not recommended and did not work reliably)
  - --> normal internet / WiFi for Secure Net
- Internet Router with VLANs
  - LAN to Proxmox with pfSense to tag VLANs
    - VLAN for IoT
    - VLAN for Secure Net
    - VLAN for Home Automation / etc
PFsense
refer [[2024-06-12-pfsense]] page
OpenWRT
https://www.youtube.com/watch?v=3mPbrunpjpk
pct create 280 ./rootfs.tar.xz --unprivileged 1 --ostype unmanaged --hostname openwrt --net0 name=eth0 --net1 name=eth1 --storage local-lvm
vi /etc/pve/lxc/2**.conf
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
Built in Proxmox Firewall
Remove all access unless required. Top-down: start at Datacenter --> Cluster / Host --> VM. Rules are applied only at that level (i.e. Datacenter, Cluster, VM); they are not inherited.
Add new rule (some examples below)
| setting | tcp (for web UI / Proxmox console; note this also enables SSH if you do not specify the port) | icmp (for ping etc., OPTIONAL) | tcp for SSH |
| --- | --- | --- | --- |
| Macro | — | — | SSH |
| Protocol | tcp | icmp | tcp (not required if using Macro) |
| Interface | vmbr0 | — | — |
| Source port | 8006 | — | — |
| Destination port | 8006 | — | 22 (not required if using Macro) |
| Source | — | ip_of_your_machine/32 | — |

tick enabled
Note: TCP without specified port also enables SSH, so you can use that instead of Macro
Firewall Options
- Enable: `YES`
- Input / Output policies
Proxmox Update
https://youtu.be/DzHRhu3On7o?si=9kVxZCSISEuENBUa
periodically run in the Proxmox node shell (Proxmox docs recommend `dist-upgrade` / `full-upgrade` over a plain `upgrade`):
apt-get update && apt-get dist-upgrade
External Storage
Tutorials:
1. try to find in GUI: Datacenter -> proxmox node -> Disks -> Initialize Disk with GPT
```sh
# to find the drive
fdisk -l
# or
lsblk
# sdb (not sdb1) is the GPT master USB disk node (not a partition)
sgdisk -N 1 /dev/sdb
```
2. format using Linux FS and mount
```sh
# format
mkfs -t ext4 /dev/sdb1 # sdb1 is a partition
# mounting
mkdir /mnt/usb_data
ls -l /dev/disk/by-uuid/ # get uuid of disk
vi /etc/fstab
# add this line (note the trailing dump/pass fields)
/dev/disk/by-uuid/your-uuid /mnt/usb_data ext4 defaults 0 0
# reload
mount -a
systemctl daemon-reload
# check via GUI (disk should show mounted=YES) and
lsblk # /sdb1 should have a mount point at /mnt/usb_data
```
- Add storage to proxmox node
- in UI, DataCenter -> Storage tab -> Add -> Directory
- ID: usb_data_wd or whatever
- Dir: /mnt/usb_data
- Content: check all
- Ensure the disk in the Proxmox node is actually the size of your disk (I had an issue where I got 100GB out of 2TB; retrying this fixed it)
- in UI, DataCenter -> Storage tab -> Add -> Directory
Encryption
cryptsetup luksFormat /dev/sda # encrypt the drive
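luksFormat only sets up the encryption header; to actually use the drive you still need to open, format, and mount the mapped device. A minimal sketch - `crypt_data` and the mount point are just example names:

```sh
cryptsetup open /dev/sda crypt_data          # unlock; prompts for the passphrase
mkfs.ext4 /dev/mapper/crypt_data             # create a filesystem on the mapped device
mkdir -p /mnt/crypt_data
mount /dev/mapper/crypt_data /mnt/crypt_data
```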
NAS
TLDR: use a TurnKey File Server LXC container with Samba
Generic considerations: LXC vs VM vs host. Consider the TurnKey File Server container - easy mode.
sharing options:
- Samba - good all generic as works on all devices
- Samba via TurnkeyFS - preferred
- NFS - Linuxy specific file sharing
Where to host:
- proxmox datacenter node - le bad
- container - good for samba. can mount existing host drive with some data onto container
- VM - good for NFS
ARR Stack
https://www.youtube.com/watch?v=g24pD3gA_wA
- Torrent clients:
  - Deluge: https://helper-scripts.com/scripts?id=Deluge
  - qBittorrent
- Sonarr - monitors for new TV episodes and sends them to the torrent client
Useful resources:
- https://helper-scripts.com/
- https://images.linuxcontainers.org/images