Author Topic: Proxmox / ArchivistaBox  (Read 4582 times)


SiLæncer
Proxmox VE 7.2
« Reply #15 on: 22 June 2022, 18:20 »
Changelog


    Backup/Restore:

        Notes templates: Meta-information can be added via a notes template for backup jobs, to better distinguish and search for backups. The template is evaluated as soon as the job is executed and is added to any resulting backup. Notes templates can contain template variables like {{guestname}} or {{cluster}} (see the sketch after this list).
        To benefit from the Rust code of Proxmox Backup Server, the Proxmox developers make use of perlmod, a Rust crate which allows exporting Rust modules as Perl packages. perlmod is used by Proxmox to transfer data between Rust and Perl, thus implementing parts of Proxmox VE and Proxmox Mail Gateway in Rust.
        The next-event scheduling code was updated via this Perl-to-Rust-binding (perlmod) and now uses the same code as Proxmox Backup Server. Users can not only specify the existing weekday, time, and time range, but now also a specific date and time (e.g., *-12-31 23:50; New Year's Eve, 10 minutes before midnight every year), date ranges (e.g., Sat *-1..7 15:00; first Saturday every month at 15:00), or repeating ranges (e.g., Sat *-1..7 */30; first Saturday every month, every half hour).
        Some basic restore settings, for example guest name or memory, can now be overwritten in the enhanced backup-restore dialog in the web interface.
        A new ‘job-init’ hook step was added to the backup process. Among other things, it can be used to prepare the backup storage, for example, by starting the storage server.
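
    As a rough sketch of how such a notes template could be used from the command line (a minimal one-off vzdump run; VMID 100 and the storage name 'local' are placeholders, only the {{guestname}} and {{cluster}} variables are taken from the notes above):

        # Back up guest 100 and attach a templated note to the resulting archive
        # (VMID and storage are placeholders):
        vzdump 100 --storage local --notes-template "{{guestname}} in {{cluster}}"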

    High Availability Manager:

        By improving the local resource manager (pve-ha-lrm) scheduler which launches workers, the number of configured services that can be handled per node has increased. This helps in large deployments, as the services at the end of the queue are also checked to ensure that they are still in the target state.
        By introducing a skip-round command to the integrated HA simulator in Proxmox VE, it has become easier to test races in scheduling (on the different nodes).
    Cluster: For the creation of new VMs or containers, version 7.2 allows you to configure a range from which new VMIDs are proposed via the web interface. The lower and upper boundaries can be set in the Datacenter -> Options panel. Setting lower equal to upper disables auto-suggestion completely, meaning the administrator has to enter an ID manually (see the sketch after this list).
    Ceph: Proxmox VE supports Ceph Pacific 16.2.7 and Ceph Octopus 15.2.16 (with continued support until mid-2022). This version also supports creating and destroying erasure-coded pools, which can be added as Proxmox VE storage entries and help reduce the amount of disk space required. A new option in the GUI allows passing the keyring secrets of external Ceph clusters when adding an RBD or CephFS storage to Proxmox VE.
    Web interface: Further enhancements in the web interface allow, for example, the safe reassignment of a VM disk or CT volume to another guest on the same node; the reassigned disk/volume can be attached at a different bus/mountpoint on the destination guest. This can help in cases of upgrades, restructuring, or after disaster recovery.
    Management: Many improvements in Proxmox VE 7.2 enable even more convenient management of the system. For example, a particular kernel version can be selected to boot persistently from a running system, through 'proxmox-boot-tool kernel pin'. The selection can apply either indefinitely or just for the next boot. This eliminates the need to watch the boot process to select the desired kernel version in the bootloader screen (see the sketch after this list).
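
    A minimal sketch of the two knobs mentioned above. The kernel version string is a placeholder, and exposing the VMID range as a next-id property on the cluster options is an assumption:

        # List available kernels, then pin one persistently or only for the next boot:
        proxmox-boot-tool kernel list
        proxmox-boot-tool kernel pin 5.15.35-1-pve
        proxmox-boot-tool kernel pin 5.15.35-1-pve --next-boot
        proxmox-boot-tool kernel unpin

        # Restrict the proposed VMID range (assumption: 'next-id' option in
        # datacenter.cfg, settable via the cluster options API):
        pvesh set /cluster/options --next-id lower=200,upper=999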

Further enhancements and Bug fixes

    In the installation ISO, ZFS installs can be configured to use various compression algorithms (e.g., zstd or gzip). Additionally, the memtest86+ package, a tool aimed at memory failure detection, has been updated to the completely rewritten version 6.0b.
    Further improvements have been added to virtual machines (KVM/QEMU); one to highlight is support for the accelerated virtio-gl (VirGL) display driver. For VirtIO and VirGL display types, SPICE is enabled by default. In modern Linux distributions, changing the graphics card to VirGL can significantly increase frames per second (FPS). For Proxmox containers (LXC), many templates have also been refreshed or newly added, such as the NixOS container template.
    The Proxmox VE Android app now provides a simple dark theme and enables it if the system settings are configured to use dark designs. The mobile app also provides an inline console by relaying noVNC for VMs, and xterm.js for containers and the Proxmox VE node shell in the GUI.
    To prevent a network outage during the transition from ifupdown to ifupdown2, the ifupdown package was modified to not stop networking upon its removal.


http://www.proxmox.com/downloads


SiLæncer
Proxmox VE 7.4
« Reply #16 on: 24 March 2023, 18:20 »
Changelog


    Based on Debian Bullseye (11.6)
    Latest 5.15 Kernel as stable default
    Newer 6.2 kernel as opt-in
    QEMU 7.2
    LXC 5.0.2
    ZFS 2.1.9
    Ceph Quincy 17.2.5
    Ceph Pacific 16.2.11

Highlights

    Proxmox VE now provides a dark theme for the web interface.
    Guests in the resource tree can now be sorted by name, not only by VMID.
    The HA Cluster Resource Scheduler (CRS) stack was expanded to rebalance VMs & CTs automatically on start, not only recovery.
    Added a CRM command to the HA manager to switch an online node manually into maintenance mode (without reboot).

Changelog Overview
Enhancements in the web interface (GUI)

    Add a fully-integrated "Proxmox Dark" color theme variant of the long-time Crisp light theme.

    By default, the prefers-color-scheme media query from the browser/OS is used to decide the default color scheme.
    Users can override the theme via a newly added Color Theme menu in the user menu.

    Add "Proxmox Dark" color theme to the Proxmox VE reference documentation.

    The prefers-color-scheme media query from the browser/OS is used to decide whether the light or dark color scheme should be used.
    The new dark theme is also available in the Proxmox VE API Viewer.

    Local storage types that are located on other cluster nodes can be added.

    A node selector was added to the Add Storage wizard for the ZFS, LVM, and LVM-Thin storage types.

    Automatically redirect HTTP requests to HTTPS for convenience.

    This avoids "Connection reset" browser errors that can be confusing, especially after setting up a Proxmox VE host for the first time.

    Task logs can now be downloaded directly as text files for further inspection.
    It is now possible to choose the sort-order of the resource tree and to sort guests by name.
    Fix loading of changelogs in case additional package repositories are configured.
    Improve editing of backup jobs:
        Add a filter to the columns of the guest selector.
        Show selected, but non-existing, guests.
    Remove the "Storage View" mode from the resource tree panel.

    This mode only showed the storage of a cluster and did not provide additional information over the folder or server views.

    The Proxmox Backup Server specific columns for verification and encryption status can now be used for sorting in the backup content view of a storage.
    Polish the user experience of the backup schedule simulator by splitting the date and time into two columns and by better checking the validity of the input fields.
    Improve accessibility for screens with the minimal required display resolution of 720p:
        Add a scrolling overflow handler for the toolbar of the backup job view.
        Rework the layout of the backup job info window for better space usage and reduce its default size.
    Fix search in "Guests without backup" window.
    Node and Datacenter resource summary panels now show the guest tag column by default.
    Show role privileges when adding permissions.
    Allow the use of the `-` character in snapshot names, as the backend has supported this for some time.
    Update the noVNC guest viewer to upstream version 1.4.0.
    Fix overly-strict permission check that prevented users with only the VM.Console privilege from accessing the noVNC console.
    Align permission checks for bulk actions with those enforced by the API.

    Switch the check from the Sys.PowerMgmt privilege to the correct VM.PowerMgmt one.

    Invalid entries in advanced fields now cause the advanced panel to unfold, providing direct feedback.
    HTML-encode API results before rendering as additional hardening against XSS.
    Fix preselection of tree elements based on the URL after login.
    Fix a race condition when switching between the content panels of two storages before one of them has finished loading.
    Metric server: Expose the verify-certificate option for InfluxDB as an advanced setting.
    Replace non-clickable checkboxes with icons for backup jobs, APT repositories, and replication jobs.
    Fix an error when editing LDAP sync settings while only a single parameter is set to a non-default value.
    Add missing online-help references for various panels and edit windows.
    Improved translations, among others:
        Arabic
        French
        German
        Italian
        Japanese
        Russian
        Slovenian
        Simplified Chinese

Virtual Machines (KVM/QEMU)

    New QEMU Version 7.2:
        QEMU 7.2 fixes issues with Windows guests installed from a German ISO during installation of the VirtIO drivers.
        Fix crash of VMs with iSCSI disks on a busy target.
        Fix rare hang of VMs with IDE/SATA during disk-related operations like backup and resize.
        Many more changes, see the upstream changelog for details.
    Taking a snapshot of a VM with large disks right after a PBS backup was occasionally very slow. This has been fixed (issue #4476).
    Running fsfreeze/fsthaw before starting a backup can now optionally be disabled in the QEMU guest agent options (see the sketch after this list).

    Note: Disabling this option can potentially lead to backups with inconsistent filesystems and should therefore only be done if you know what you are doing.

    Cloning or moving a disk of an offline VM now also takes the configured bandwidth limits into consideration (issue #4249).
    Fix an issue with EFI disks on ARM 64 VMs.
    Add safeguards preventing the moving of disks of a VM using io_uring to storage types that have problems with io_uring in some kernel versions.
    General improvements to error reporting. For example, the error messages from query-migrate are added when a migration fails, and a configured but non-existing physical CD-ROM drive results in a descriptive error message.
    Allow users to destroy a VM even if it's suspended.
    Fix a race condition when migrating VMs on highly loaded or slow clusters, where the move of the guest's config file to the target node's directory might not yet have been propagated to the target node.
    Rolling back a VM to a snapshot with state (memory) and still selecting to start the VM after the rollback does not cause an error anymore (rollbacks with state result in a running VM).
    Deleting snapshots of running VMs, with a configured TPM on Ceph storages with krbd enabled, is now possible.
    Fix command execution via pvesh and QEMU guest agent in VMs on other cluster nodes.
    Update Linux OS version description to include 6.x kernels.
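
    A sketch of disabling the fsfreeze/fsthaw step via the guest agent options. The property name freeze-fs-on-backup is an assumption derived from the wording above, and VMID 100 is a placeholder:

        # Keep the guest agent enabled, but skip fsfreeze/fsthaw before backups.
        # Caution: this can lead to backups with inconsistent filesystems.
        qm set 100 --agent enabled=1,freeze-fs-on-backup=0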

Containers (LXC)

    Update to LXC 5.0.2 and lxcfs 5.0.3.
    Allow riscv32 and riscv64 container architectures through the binfmt_misc kernel capability (see the sketch after this list).

    After installing the qemu-user-static and binfmt-support packages, one can use a RISC-V based rootfs image to run as a container directly on an x86_64/amd64 Proxmox VE host.

    Create /etc/hostname file on Alma Linux, CentOS, and Rocky Linux containers. With this, DHCP requests sent by the container now include its hostname.
    Add option to disconnect network interfaces of containers, similarly to network interfaces of VMs.
    Make container start more resilient after OOM or node crash (empty AppArmor profile files do not cause a crash).
    Improve cleanup upon failed restores (remove the container configuration if restore fails due to an invalid source archive, remove firewall configuration).
    Ignore bind or read-only mount points when running pct fstrim.
    During container shutdown, wait with a timeout in case lxc-stop fails. This prevents the shutdown task from running indefinitely and having to be aborted manually.
    Templates:
        Updated Debian Bullseye template from 11.3 to 11.6.
        Updated Proxmox Mail Gateway template from 7.0 to 7.2.
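
    A sketch of running a RISC-V container on an amd64 host, assuming a riscv64 rootfs archive has already been placed on the local storage (the template file name and VMID 200 are placeholders):

        # Emulation layer for foreign-architecture binaries:
        apt install qemu-user-static binfmt-support

        # Create and start a container from a riscv64 rootfs image:
        pct create 200 local:vztmpl/riscv64-rootfs.tar.xz --arch riscv64 --hostname riscv-test
        pct start 200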

General improvements for virtual guests

    The "Bulk Stop" action was renamed to "Bulk Shutdown" to better describe its behavior.
    Allow overriding timeout and force-stop settings for bulk shutdowns.
    Allow bulk actions even if the user does not have the required privileges for all guests, as long as they have the privileges for each guest involved in the bulk action.

HA Manager

    Add a CRM command to switch an online node manually into maintenance mode (without reboot); see the sketch after this list.

    When a node goes into maintenance mode, all active HA services are moved to other nodes; they are automatically migrated back once maintenance mode is disabled again.

    The HA Cluster Resource Scheduler (CRS) stack was expanded to rebalance VMs & CTs automatically on start, not only recovery.

    One can now enable the ha-rebalance-on-start option in the datacenter.cfg or via the web UI to use the Proxmox CRS to balance services on start-up.

    A new intermediate state request_started has been added for the stop -> start transitions of services.
    Improve the scheduling algorithm for some cases:
        Make CPU load matter more if there is no memory load at all; this avoids boosting tiny relative differences over higher absolute loads.
        Use a non-linear averaging algorithm when comparing loads.

    The previous algorithm was blind in cases where the static node stats are the same and there is (at least) one node that is overcommitted compared to the others.
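
    A sketch of both HA features described above. The node name is a placeholder; the crm-command syntax and the crs line in datacenter.cfg follow the descriptions above but should be treated as assumptions:

        # Put a node into maintenance mode and take it out again:
        ha-manager crm-command node-maintenance enable pve-node2
        ha-manager crm-command node-maintenance disable pve-node2

        # datacenter.cfg (Datacenter -> Options): use the static CRS scheduler
        # and rebalance services already on start:
        # crs: ha=static,ha-rebalance-on-start=1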

Improved management for Proxmox VE clusters

    Ensure that the current working directory is not in /etc/pve when you set up the cluster using the pvecm CLI tool.

    Since pmxcfs, which provides the mount point for /etc/pve, is restarted when you set up the cluster, a confusing "Transport endpoint is not connected" error message would be reported otherwise.

    The proxmox-offline-mirror tool now supports fetching data through an HTTP proxy.
    Fetching the changelog of package updates has been improved:
        The correct changelog will be downloaded if repositories from multiple Proxmox projects are configured, for example if one has Proxmox VE and Proxmox Backup Server installed on the same host.
        Support getting the changelog for packages coming from a Debian Backports repository.
    You can now configure whether you want to receive a notification mail for newly available package updates.
    The wrapper for acme.sh DNS-validation plugins received fixes for two small issues:
        A renaming of parameters for the acmedns plugin was pulled in from upstream.
        A missing method was added to fix an issue with the dns_cf.sh plugin.
    Improved pvereport: In order to provide a better status overview, add the following information:
        /etc/pve/datacenter.cfg.
        ceph health detail.
    OpenSSL errors are now reported in full to ease troubleshooting when managing the node's certificates.
    Add missing or newly added/split-out packages to the Proxmox VE apt version API, also used for the pveversion -v call:
        proxmox-mail-forward
        proxmox-kernel-helper
        libpve-rs-perl

Backup/Restore

    Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.

Storage

    It is now possible to override the specific subdirectories for content (ISOs, container templates, backups, guest disks) with custom values through the content-dirs option (see the sketch after this list).
    The CIFS storage type can now also directly mount a specific subdirectory of a share, thus better integrating into already existing environments.
    The availability check for the NFSv4 storage type was reworked in order to work with setups running without rpcbind.
    Fix ISO upload via HTTP in a few edge cases (newlines in filenames, additional headers not sent by common browsers).
    Fix caching of volume information during guest disk rescan for systems that have both a local ZFS pool storage and a ZFS over iSCSI storage configured.
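
    A minimal sketch of the content-dirs override, assuming a comma-separated list of vtype=subdirectory pairs with paths relative to the storage mount point (storage name and paths are placeholders):

        # Store ISO images and backups in custom subdirectories of the 'local' storage:
        pvesm set local --content-dirs iso=custom/iso,backup=custom/dump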

Storage Replication

    Extend support for online migration of replicated VM guests.

    One can now also migrate VMs that include snapshots, as long as those are only on replicated volumes.

Disk Management

    Improve the display of SMART values so they are shown for the correct NVMe devices.

Ceph

    Expose more detailed OSD information through the API and use that to add an OSD Detail window in the web interface.

    You can now check the backing device, logical volume info, front and back network addresses, and more using the new OSD Detail window.

    Show placement groups per OSD in the web interface.
    Improve schema description for various Ceph-related API endpoints.

    This also improves the api-viewer and pvesh tool for various Ceph-related API endpoints.

    Fix broken cmd-safety endpoint that made it impossible for non-root users to stop/destroy OSDs and monitors.
    Allow admins to easily set up multiple MDS per node to increase redundancy if more than one CephFS is configured.

Access Control

    ACL computation was refactored causing a significant performance improvement (up to a factor of 450) on setups with thousands of entries.
    It is now possible to override the remove-vanished settings for a realm when actively syncing it in the GUI.
    Allow quoted values in LDAP DN attributes when setting up an LDAP realm.

Firewall & Software Defined Networking

    ipsets can be added even with set host-bits. For example, 192.0.2.5/24 is now a valid input. Host-bits get cleared upon parsing (resulting in 192.0.2.0/24 in the example).
    Firewall logs can be restricted to a timeframe with the since and until parameters of the API call.
    The conditional loading of nf_conntrack_helpers was dropped for compatibility with kernel 6.1.
    A corner case was fixed to ensure that link-local IPv6 addresses are not added on the internal guest-communication devices.
    The MTU is now set to the value of the parent bridge on the automatically generated VLAN-bridge devices for non-VLAN-aware bridges.
    The EVPN plugin now also merges a defined prefix-list from /etc/frr/frr.conf.local.

Installation ISO

    The version of BusyBox shipped with the ISO was updated to 1.36.0.
    The EFI System Partition (ESP) now defaults to a size of 1 GiB if the used disk (hdsize) is bigger than 100 GB.
    UTC can now be selected as timezone during installation.

Notable bug fixes

    An issue with the OVS network configuration, where the node would lose connectivity when upgrading Open vSwitch, was fixed (see https://bugs.debian.org/1008684).
    A race condition in the API servers causing failed tasks when running a lot of concurrent API requests was fixed.

Known Issues & Breaking Changes

    In QEMU 7.2, a failed audio initialization is now a hard error rather than just a warning.

    This can happen, for example, if you have an audio device with the SPICE driver configured but are not using the SPICE display. To avoid the issue, make sure the configuration is valid.

    With pve-edk2-firmware >= 3.20221111-1 we know of two issues affecting specific setups:
        virtual machines using OVMF/EFI with very little memory (< 1 GiB) and certain CPU types (e.g. host) might no longer boot.

        Possible workarounds are to assign more memory or to use kvm64 as the CPU type.
        The background for this problem is that OVMF versions older than 3.20221111-1 used to guess the address (bit) width only from the available memory, while now there is more accurate detection that better matches what the configured CPU type provides. The more accurate address width can lead to a larger space requirement for page tables.

        A regression with the (non-default) PVSCSI disk controller might result in SCSI disks not being detected inside the guest.

        We're still investigating this; until then, you might evaluate whether your VM really requires the non-standard PVSCSI controller, use the SATA bus instead, or keep using the older pve-edk2-firmware package.


http://www.proxmox.com/downloads


SiLæncer
Proxmox VE 8.0
« Reply #17 on: 22 June 2023, 21:50 »
Here is a selection of the highlights of the Proxmox VE 8.0 final version

    Debian 12, but using a newer Linux kernel 6.2
    QEMU 8.0.2, LXC 5.0.2, ZFS 2.1.12
    Ceph Server:
        Ceph Quincy 17.2 is the default and comes with continued support.
        There is now an enterprise repository for Ceph, which can be accessed via any Proxmox VE subscription, providing the best stability for production systems.
    Additional text-based user interface (TUI) for the installer ISO.
    Host network bridge and VNet access for configuring virtual guests is now integrated into the ACL system of Proxmox VE.
    Add access realm sync jobs to conveniently synchronize users and groups from an LDAP/AD server automatically at regular intervals.
    New default CPU type for VMs: x86-64-v2-AES (see the sketch after this list).
    Resource mappings between PCI(e) or USB devices and nodes in a Proxmox VE cluster.
    Countless GUI and API improvements.
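
A sketch of switching an existing VM to the new default CPU type (VMID 100 is a placeholder):

    # Use the new x86-64-v2-AES model for a VM:
    qm set 100 --cpu x86-64-v2-AES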

As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release Notes: https://pve.proxmox.com/wiki/Roadmap

https://www.proxmox.com/downloads


SiLæncer
Proxmox VE 8.1
« Reply #18 on: 24 November 2023, 20:50 »
Highlights

    Support for Secure Boot: This version is now compatible with Secure Boot. This security feature protects the boot process of a computer by ensuring that only software with a valid digital signature launches on a machine. Proxmox VE now includes a signed shim bootloader trusted by the UEFI implementations of most hardware, which allows installing Proxmox VE in environments with Secure Boot active.

    Software-defined Network (SDN): With this version, the core Software-defined Network (SDN) packages are installed by default. The SDN technology in Proxmox VE makes it possible to create virtual zones and networks (VNets), enabling users to effectively manage and control complex networking configurations and multitenancy setups directly from the web interface at the datacenter level. Use cases for SDN range from an isolated private network on each individual node to complex overlay networks across multiple Proxmox VE clusters at different locations. The result is a more responsive and adaptable network infrastructure that can scale according to business needs.
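
    As a rough sketch of a smallest-possible SDN setup (an isolated 'simple' zone with one VNet), driven via pvesh against the /cluster/sdn endpoints; the zone and VNet names are placeholders and the exact parameters should be treated as assumptions:

        # Create an isolated 'simple' zone and a VNet inside it:
        pvesh create /cluster/sdn/zones --type simple --zone z1
        pvesh create /cluster/sdn/vnets --vnet vnet1 --zone z1
        # Apply the pending SDN configuration; guests can then use 'vnet1' as a bridge:
        pvesh set /cluster/sdn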

    New Flexible Notification System: This release introduces a new framework that uses a matcher-based approach to route notifications. It lets users designate different target types as recipients of notifications. Alongside the current local Postfix MTA, supported targets include Gotify servers or SMTP servers that require SMTP authentication. Notification matchers determine which targets will get notifications for particular events based on predetermined rules. The new notification system now enables greater flexibility, allowing for more granular definitions of when, where, and how notifications are sent.

    Support for Ceph Reef and Ceph Quincy: Proxmox Virtual Environment 8.1 adds support for Ceph Reef 18.2.0 and continues to support Ceph Quincy 17.2.7. The preferred Ceph version can be selected during the installation process. Ceph Reef brings better defaults, improving performance and increasing read speed.



Release Notes: https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.1

https://www.proxmox.com/downloads


SiLæncer
Proxmox VE 8.2
« Reply #19 on: 24 April 2024, 21:45 »
Highlights

    Import Wizard for VMware ESXi VMs: Proxmox VE provides an integrated VM importer, presented as a storage plugin for native integration into the API and web-based user interface. It offers users the ability to import guests directly from other hypervisors; currently, it allows importing VMware-based VMs (ESXi and vCenter). You can use this to import a VM as a whole, with most of the original configuration settings mapped to Proxmox VE's configuration model.

    Automated and Unattended Installation: Proxmox offers a new 'proxmox-auto-install-assistant' tool that fully automates the setup process on bare metal. Automated installation allows for the rapid deployment of Proxmox VE hosts without the need for manual access to the systems, saving time and reducing the risk of errors. To use this method, an answer file must be prepared with the necessary configuration settings for the installation process. This file can be provided directly in the ISO, on an additional disk such as a USB flash drive, or over the network. Automated installation is useful in various scenarios, such as deploying large-scale infrastructure, automating the setup process, and ensuring consistent configurations across multiple systems (see the sketch below).
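
    A minimal sketch of an answer file and of embedding it in the ISO. The TOML keys follow the automated-installation documentation, but the exact field names and the prepare-iso invocation should be treated as assumptions; all values are placeholders:

        # answer.toml
        [global]
        keyboard = "en-us"
        country = "us"
        fqdn = "pve1.example.com"
        mailto = "admin@example.com"
        timezone = "UTC"
        root_password = "change-me"

        [network]
        source = "from-dhcp"

        [disk-setup]
        filesystem = "ext4"
        disk_list = ["sda"]

        # Embed the answer file into the installer ISO:
        proxmox-auto-install-assistant prepare-iso proxmox-ve.iso --fetch-from iso --answer-file answer.toml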

    Backup Fleecing: When creating a backup of a running VM, a slow backup target can negatively impact guest IO performance during the backup process. Fleecing can reduce this impact by caching data blocks in a fleecing image rather than sending them directly to the backup target, which can help guest IO performance and even prevent hangs, at the cost of requiring more storage space. Backup fleecing is especially beneficial when backing up IO-heavy guests to a remote Proxmox Backup Server or other backup storage with a slow network connection (see the sketch below).
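
    A sketch of a fleecing-enabled backup run, assuming vzdump exposes the feature as a fleecing property (VMID and storage names are placeholders):

        # Back up to a remote PBS while caching dirty blocks in a local
        # fleecing image on fast storage:
        vzdump 100 --storage pbs-remote --fleecing enabled=1,storage=local-lvm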

    Firewall modernization with nftables (technology preview): Proxmox VE comes with a new firewall implementation that uses nftables instead of iptables. The opt-in feature in tech preview is written in the Rust programming language. Although the new implementation is close to feature parity with the existing one, the nftables firewall must be enabled manually and remains a preview to first gather feedback from the community.


Release Notes: https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.2

https://www.proxmox.com/downloads
