Automatically including '/usr/share/nftables.d/table-post/20-miniupnpd.nft'
Automatically including '/usr/share/nftables.d/chain-post/dstnat/20-miniupnpd.nft'
Automatically including '/usr/share/nftables.d/chain-post/forward/20-miniupnpd.nft'
Automatically including '/usr/share/nftables.d/chain-post/srcnat/20-miniupnpd.nft'
Verifying that it works
Check that it works from a Windows machine using PowerShell and UPnPCJ. Temporarily turn the firewall off while testing.
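Before testing from Windows, you can also check on the router that the miniupnpd include files shown above actually made it into the loaded ruleset (a quick sanity check; the exact chain and rule names depend on the miniupnpd package):
# Run as root on the router; lists every loaded rule that mentions miniupnpd
nft list ruleset | grep -i miniupnpd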
When IP packets are fragmented, intermediate network devices may drop them and some servers become unreachable. Add the following settings to reduce the packet size.
# To avoid IP packet fragmentation errors; the client config must use the same setting
tun-mtu 1500
# The client config should also include the following to address packet fragmentation
# mssfix 1400
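If fragmentation problems persist even with these values, you can probe the usable path MTU from a client with ping and the don't-fragment flag before tuning further (a rough sketch; adjust the payload size and the target host to your environment):
# 1400 bytes of payload plus 28 bytes of ICMP/IP headers; lower -s until the ping gets through
ping -M do -s 1400 example.com
The script below bundles the CA certificate, each client's certificate and key, and the tls-auth key into a single inline .ovpn file per client.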
#!/bin/bash
# This script generates inline OpenVPN client configuration files
# for the given client names. It assumes that the client certificate
# and key are stored in separate directories.
#
# Usage:
# ./generate-client-configs.sh client1 client2
# Common files (adjust paths as needed)
CA_FILE="/etc/openvpn/easy-rsa/pki/ca.crt"
TA_FILE="/etc/openvpn/ta.key"
# Directories where client certificates and keys are stored
CERT_DIR="/etc/openvpn/easy-rsa/pki/issued"
KEY_DIR="/etc/openvpn/easy-rsa/pki/private"
# Directory where the output configuration files will be stored
OUTPUT_DIR="./openvpn-client-configs"
# Prefix of client config file name
OUTPUT_FILE_PREFIX="my-openvpn-"
# Server information
SERVER_ADDRESS="example.com"
PORT=11940
# Base configuration template
BASE_CONFIG=$(cat <<EOF
client
dev tun
proto udp
remote ${SERVER_ADDRESS} ${PORT}
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
# To avoid packet fragmentation; the server config must have the same tun-mtu setting
tun-mtu 1500
mssfix 1400
data-ciphers AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305
auth SHA256
tls-ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
verb 3
EOF
)
# Function to embed file content in inline tags
embed_file() {
  local tag="$1"
  local file="$2"
  echo "<${tag}>"
  cat "${file}"
  echo "</${tag}>"
  echo ""
}
# Check if at least one client name is provided
if [ "$#" -eq 0 ]; then
echo "Usage: $0 client_name1 [client_name2 ...]"
exit 1
fi
# Create the output directory if it doesn't exist
mkdir -p "${OUTPUT_DIR}"
# Generate configuration file for each client provided as argument
for client in "$@"; do
  CLIENT_CERT="${CERT_DIR}/${client}.crt"
  CLIENT_KEY="${KEY_DIR}/${client}.key"
  # Check if all required files exist
  for file in "$CA_FILE" "$CLIENT_CERT" "$CLIENT_KEY" "$TA_FILE"; do
    if [ ! -f "$file" ]; then
      echo "Error: Required file '$file' not found for client '${client}'." >&2
      continue 2
    fi
  done
  OUTPUT_FILE="${OUTPUT_DIR}/${OUTPUT_FILE_PREFIX}${client}.ovpn"
  # Write the base configuration to the output file
  echo "${BASE_CONFIG}" > "${OUTPUT_FILE}"
  echo "" >> "${OUTPUT_FILE}"
  # Embed certificate and key files inline
  {
    embed_file "ca" "${CA_FILE}"
    embed_file "cert" "${CLIENT_CERT}"
    embed_file "key" "${CLIENT_KEY}"
    embed_file "tls-auth" "${TA_FILE}"
    echo "key-direction 1"
  } >> "${OUTPUT_FILE}"
  echo "Client config file '${OUTPUT_FILE}' created successfully."
done
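A typical run looks like this; the keys under /etc/openvpn are normally readable only by root, so run the script with sudo and then hand the generated .ovpn files to the clients:
chmod +x ./generate-client-configs.sh
sudo ./generate-client-configs.sh client1 client2
ls ./openvpn-client-configs/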
Port 2222
LogLevel VERBOSE
PermitRootLogin no
PasswordAuthentication no
MaxAuthTries 3
MaxSessions 1
PubkeyAuthentication yes
PermitEmptyPasswords no
X11Forwarding no
AllowUsers <NEW_USERNAME>
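After editing sshd_config, it is worth validating the syntax before restarting so that a typo does not lock you out (this assumes Ubuntu, where the service is named ssh):
# Check the configuration for syntax errors
sudo sshd -t
# Apply the new settings; keep the current session open until a login on port 2222 succeeds
sudo systemctl restart ssh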
Install LXD with the following commands. For most of the prompts, just press Enter to accept the defaults.
sudo snap install lxd
sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GiB of the new loop device (1GiB minimum) [default=30GiB]: 800GiB
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
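To confirm what lxd init just created, you can list the storage pool and the bridge (purely a sanity check):
lxc storage list
lxc network list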
lxc network set lxdbr0 ipv4.firewall false
lxc network set lxdbr0 ipv6.firewall false
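Both keys can be read back with lxc network get to confirm they took effect:
lxc network get lxdbr0 ipv4.firewall
lxc network get lxdbr0 ipv6.firewall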
Allow routing for the LXD bridge network with the ufw command.
# allow the guest to get an IP from the LXD host
sudo ufw allow in on lxdbr0 to any port 67 proto udp
sudo ufw allow in on lxdbr0 to any port 547 proto udp
# allow the guest to resolve host names from the LXD host
sudo ufw allow in on lxdbr0 to any port 53
# allow the guest to have access to outbound connections
CIDR4="$(lxc network get lxdbr0 ipv4.address | sed 's|\.[0-9]\+/|.0/|')"
CIDR6="$(lxc network get lxdbr0 ipv6.address | sed 's|:[0-9]\+/|:/|')"
sudo ufw route allow in on lxdbr0 from "${CIDR4}"
sudo ufw route allow in on lxdbr0 from "${CIDR6}"
Convert the Windows virtual machine into a format that LXD can import with the following command. Linux virtual machines can be converted with the same command.
mkdir ./os
sudo virt-v2v --block-driver virtio-scsi -o local -of raw -os ./os -i vmx ./test-vm.vmx
[ 0.0] Setting up the source: -i vmx ./test-vm.vmx
[ 1.0] Opening the source
[ 13.5] Inspecting the source
[ 15.1] Checking for sufficient free disk space in the guest
[ 15.1] Converting Windows 10 Pro to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 24.7] Mapping filesystem data to avoid copying unused and blank areas
[ 25.6] Closing the overlay
[ 25.7] Assigning disks to buses
[ 25.7] Checking if the guest needs BIOS or UEFI to boot
virt-v2v: This guest requires UEFI on the target to boot.
[ 25.7] Setting up the destination: -o disk -os ./os
[ 26.8] Copying disk 1/1
100% [****************************************]
[ 275.0] Creating output metadata
[ 275.0] Finishing off
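With -o local the converted raw disk and a small libvirt XML description end up in the output directory, so a quick listing confirms the conversion completed (file names are derived from the VM name, e.g. test-vm-sda for the disk):
ls -lh ./os/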
cd ~/tmp
wget https://github.com/canonical/lxd/releases/latest/download/bin.linux.lxd-migrate.x86_64
chmod u+x ./bin.linux.lxd-migrate.x86_64
Run the migration tool to import the virtual machine.
sudo ./bin.linux.lxd-migrate.x86_64
Please provide LXD server URL: https://192.168.1.101:8443
Certificate fingerprint: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
ok (y/n)? y
1) Use a certificate token
2) Use an existing TLS authentication certificate
3) Generate a temporary TLS authentication certificate
Please pick an authentication mechanism above: 3
Your temporary certificate is:
-----BEGIN CERTIFICATE-----
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-----END CERTIFICATE-----
It is recommended to have this certificate be manually added to LXD through `lxc config trust add` on the target server.
Alternatively you could use a pre-defined trust password to add it remotely (use of a trust password can be a security issue).
Would you like to use a trust password? [default=no]: yes
Trust password: [the password set in core.trust_password]
Remote LXD server:
Hostname: lxd-server
Version: 5.21.1
Would you like to create a container (1) or virtual-machine (2)?: 2
Name of the new instance: test-vm
Please provide the path to a disk, partition, or image file: /path/to/disk/image/test-vm-sda (specify the converted disk file)
Does the VM support UEFI Secure Boot? [default=no]:
Instance to be created:
Name: test-vm
Project: default
Type: virtual-machine
Source: /path/to/disk/image/test-vm-sda
Config:
security.secureboot: "false"
Additional overrides can be applied at this stage:
1) Begin the migration with the above configuration
2) Override profile list
3) Set additional configuration options
4) Change instance storage pool or volume size
5) Change instance network
Please pick one of the options above [default=1]: 3
Please specify config keys and values (key=value ...): limits.cpu=2
Instance to be created:
Name: test-vm
Project: default
Type: virtual-machine
Source: /path/to/disk/image/test-vm-sda
Config:
limits.cpu: "2"
security.secureboot: "false"
Additional overrides can be applied at this stage:
1) Begin the migration with the above configuration
2) Override profile list
3) Set additional configuration options
4) Change instance storage pool or volume size
5) Change instance network
Please pick one of the options above [default=1]: 3
Please specify config keys and values (key=value ...): limits.memory=4GB
Instance to be created:
Name: test-vm
Project: default
Type: virtual-machine
Source: /path/to/disk/image/test-vm-sda
Config:
limits.cpu: "2"
limits.memory: 4GB
security.secureboot: "false"
Additional overrides can be applied at this stage:
1) Begin the migration with the above configuration
2) Override profile list
3) Set additional configuration options
4) Change instance storage pool or volume size
5) Change instance network
Please pick one of the options above [default=1]: 4
Please provide the storage pool to use: default
Do you want to change the storage size? [default=no]:
Instance to be created:
Name: test-vm
Project: default
Type: virtual-machine
Source: /path/to/disk/image/test-vm-sda
Storage pool: default
Config:
limits.cpu: "2"
limits.memory: 4GB
security.secureboot: "false"
Additional overrides can be applied at this stage:
1) Begin the migration with the above configuration
2) Override profile list
3) Set additional configuration options
4) Change instance storage pool or volume size
5) Change instance network
Please pick one of the options above [default=1]: 5
Please specify the network to use for the instance: lxdbr0
Instance to be created:
Name: test-vm
Project: default
Type: virtual-machine
Source: /path/to/disk/image/test-vm-sda
Storage pool: default
Network name: lxdbr0
Config:
limits.cpu: "2"
limits.memory: 4GB
security.secureboot: "false"
Additional overrides can be applied at this stage:
1) Begin the migration with the above configuration
2) Override profile list
3) Set additional configuration options
4) Change instance storage pool or volume size
5) Change instance network
Please pick one of the options above [default=1]: 1
Transferring instance: test-vm: 1.03GB (257.25MB/s)
Instance test-vm successfully created
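Once the migration finishes, the instance can be started and inspected like any other LXD VM; for a Windows guest the VGA console is the easiest way to check that it boots:
lxc start test-vm
lxc list test-vm
# Open a graphical console to the Windows guest (requires a SPICE client such as remote-viewer)
lxc console test-vm --type=vga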