v8.22 (#6591)
- Images: All our images are now compressed via xz instead of 7z. These are a little easier to handle, especially on Linux hosts, and many flashing utilities can write xz-compressed images directly to disk, without the need to decompress them manually first. As xz compresses single files rather than directories, the dedicated README.md and hash text files are no longer included. Hash files for integrity checks within an archive serve no real purpose anyway, as the compression formats embed checksums internally (CRC64 in the case of xz), which are verified, and the integrity of the content thereby assured, as part of decompression.
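
As a practical note on the point above, here is a minimal shell sketch of how the new .img.xz releases can be verified, decompressed, or written straight to a disk; the image file name and the target device are placeholders, not actual release names.

# Verify the archive without extracting it: xz checks the embedded CRC64.
xz -t DietPi_ExampleBoard-ARMv8-Bookworm.img.xz

# Decompress to a plain .img, keeping the compressed archive (-k).
xz -dk DietPi_ExampleBoard-ARMv8-Bookworm.img.xz

# Or stream the image directly to a target disk without a temporary .img file.
xzcat DietPi_ExampleBoard-ARMv8-Bookworm.img.xz | dd of=/dev/sdX bs=4M status=progress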
MichaIng authored Sep 14, 2023
1 parent 13dfec1 commit f3e5f09
Showing 3 changed files with 31 additions and 118 deletions.
115 changes: 21 additions & 94 deletions .build/images/dietpi-build
@@ -210,11 +210,7 @@ fi
# Virtual machine disk conversion
[[ $VMTYPE && $VMTYPE != 'raw' ]] && apackages+=('qemu-utils')

# p7zip vs 7zip package
# shellcheck disable=SC2015
(( $G_DISTRO < 7 )) && apackages+=('p7zip') c7zz='7zr' || apackages+=('7zip') c7zz='7zz'

G_AG_CHECK_INSTALL_PREREQ parted debootstrap dbus systemd-container "${apackages[@]}"
G_AG_CHECK_INSTALL_PREREQ parted debootstrap dbus systemd-container xz-utils "${apackages[@]}"

# Bootstrap archive keyring if missing
if [[ ! -f $keyring ]]
@@ -248,9 +244,8 @@ then
79) series=6;;
*) :;;
esac
G_EXEC curl -sSfO "https://dietpi.com/downloads/nanopi$series.7z"
G_EXEC "$c7zz" x "nanopi$series.7z"
G_EXEC rm "nanopi$series.7z"
G_EXEC curl -sSfO "https://dietpi.com/downloads/nanopi$series.img.xz"
G_EXEC xz -d "nanopi$series.img.xz"
G_EXEC truncate -s "$(( 140 + $root_size ))M" "nanopi$series.img"
G_EXEC_OUTPUT=1 G_EXEC sgdisk -e "nanopi$series.img"
G_EXEC_OUTPUT=1 G_EXEC eval "sfdisk -fN8 'nanopi$series.img' <<< ',+'"
@@ -669,11 +664,9 @@ fi
##########################################
# Virtual machines
##########################################
G_EXEC_DESC='Downloading current README.md to pack with image...' G_EXEC curl -sSf "https://raw.githubusercontent.com/$G_GITOWNER/DietPi/$G_GITBRANCH/README.md" -o README.md

# NB: LZMA2 ultra compression requires much memory per thread. 1 GiB is not sufficient for >2 threads, hence use "-mmt2" to limit used CPU threads to "2" on 1 GiB devices with more than two cores.
limit_threads=()
(( $(free -m | mawk '/Mem:/{print $2}') < 1750 && $(nproc) > 2 )) && limit_threads=('-mmt2')
(( $(free -m | mawk '/Mem:/{print $2}') < 1750 && $(nproc) > 2 )) && limit_threads=('-T2')

# Since qemu-img does not support VMDK and VHDX resizing, we need to resize the raw .img. It is usually done as sparse file, hence the actual disk usage does not change.
G_EXEC qemu-img resize "$OUTPUT_IMG_NAME.img" 8G
@@ -728,25 +721,11 @@ ethernet0.virtualDev = "e1000"
ethernet0.present = "TRUE"
extendedConfigFile = "$image_name.vmxf"
floppy0.present = "FALSE"
_EOF_
G_DIETPI-NOTIFY 2 'Generating hashes to pack with VMware appliance, please wait...'
cat << _EOF_ > hash.txt
FILE: $image_name.vmdk
DATE: $(date)
MD5: $(md5sum "$image_name.vmdk" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.vmdk" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.vmdk" | mawk '{print $1}')
FILE: $image_name.vmx
DATE: $(date)
MD5: $(md5sum "$image_name.vmx" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.vmx" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.vmx" | mawk '{print $1}')
_EOF_
[[ ( -t 0 || -t 1 ) && $TERM != 'dumb' ]] && G_EXEC_OUTPUT=1
G_EXEC_DESC='Creating VMware 7-Zip archive' G_EXEC "$c7zz" a -mx=9 "${limit_threads[@]}" "$image_name.7z" "$image_name.vmdk" "$image_name.vmx" hash.txt README.md
G_EXEC_DESC='Creating VMware tar.xz archive' XZ_OPT="-e9 ${limit_threads[*]}" G_EXEC tar -cJf "$image_name.tar.xz" "$image_name.vmdk" "$image_name.vmx"
G_EXEC rm "$image_name.vmdk" "$image_name.vmx"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.7z" && G_EXEC rm "$image_name.7z"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.tar.xz" && G_EXEC rm "$image_name.tar.xz"
fi

####### ESXi #############################
@@ -870,18 +849,9 @@ _EOF_
[[ $VMTYPE == 'all' ]] || G_EXEC rm "$image_name.vmdk"
G_EXEC rm "$image_name."{ovf,mf}

G_DIETPI-NOTIFY 2 'Generating hashes to pack with ESXi appliance, please wait...'
cat << _EOF_ > hash.txt
FILE: $image_name.ova
DATE: $(date)
MD5: $(md5sum "$image_name.ova" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.ova" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.ova" | mawk '{print $1}')
_EOF_
[[ ( -t 0 || -t 1 ) && $TERM != 'dumb' ]] && G_EXEC_OUTPUT=1
G_EXEC_DESC='Creating VirtualBox 7-Zip archive' G_EXEC "$c7zz" a -mx=9 "${limit_threads[@]}" "$image_name.7z" "$image_name.ova" hash.txt README.md
G_EXEC rm "$image_name.ova"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.7z" && G_EXEC rm "$image_name.7z"
G_EXEC_DESC='Creating ESXi xz archive' G_EXEC xz -9e "${limit_threads[@]}" "$image_name.ova"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.ova.xz" && G_EXEC rm "$image_name.ova.xz"
fi

####### VirtualBox #######################
@@ -1028,18 +998,9 @@ _EOF_
G_EXEC tar -cf "$image_name.ova" "$image_name."{ovf,vmdk,mf}
G_EXEC rm "$image_name."{ovf,vmdk,mf}

G_DIETPI-NOTIFY 2 'Generating hashes to pack with VMware appliance, please wait...'
cat << _EOF_ > hash.txt
FILE: $image_name.ova
DATE: $(date)
MD5: $(md5sum "$image_name.ova" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.ova" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.ova" | mawk '{print $1}')
_EOF_
[[ ( -t 0 || -t 1 ) && $TERM != 'dumb' ]] && G_EXEC_OUTPUT=1
G_EXEC_DESC='Creating VirtualBox 7-Zip archive' G_EXEC "$c7zz" a -mx=9 "${limit_threads[@]}" "$image_name.7z" "$image_name.ova" hash.txt README.md
G_EXEC rm "$image_name.ova"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.7z" && G_EXEC rm "$image_name.7z"
G_EXEC_DESC='Creating VirtualBox xz archive' G_EXEC xz -9e "${limit_threads[@]}" "$image_name.ova"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.ova.xz" && G_EXEC rm "$image_name.ova.xz"
fi

####### Hyper-V ##########################
@@ -1049,18 +1010,9 @@ then
# Convert raw image to VHDX
G_EXEC qemu-img convert -O vhdx "$OUTPUT_IMG_NAME.img" "$image_name.vhdx"

G_DIETPI-NOTIFY 2 'Generating hashes to pack with Hyper-V image, please wait...'
cat << _EOF_ > hash.txt
FILE: $image_name.vhdx
DATE: $(date)
MD5: $(md5sum "$image_name.vhdx" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.vhdx" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.vhdx" | mawk '{print $1}')
_EOF_
[[ ( -t 0 || -t 1 ) && $TERM != 'dumb' ]] && G_EXEC_OUTPUT=1
G_EXEC_DESC='Creating Hyper-V 7-Zip archive' G_EXEC "$c7zz" a -mx=9 "${limit_threads[@]}" "$image_name.7z" "$image_name.vhdx" hash.txt README.md
G_EXEC rm "$image_name.vhdx"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.7z" && G_EXEC rm "$image_name.7z"
G_EXEC_DESC='Creating Hyper-V xz archive' G_EXEC xz -9e "${limit_threads[@]}" "$image_name.vhdx"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.vhdx.xz" && G_EXEC rm "$image_name.vhdx.xz"
fi

####### Proxmox ############################
@@ -1070,18 +1022,13 @@ then
# Convert raw image to QCOW2
G_EXEC qemu-img convert -c -O qcow2 "$OUTPUT_IMG_NAME.img" "$image_name.qcow2"

G_DIETPI-NOTIFY 2 'Generating hashes to pack with Proxmox image, please wait...'
cat << _EOF_ > hash.txt
FILE: $image_name.qcow2
DATE: $(date)
MD5: $(md5sum "$image_name.qcow2" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.qcow2" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.qcow2" | mawk '{print $1}')
_EOF_
# Keep the QCOW2 image through compression when a UTM appliance shall be generated from it as well.
keep=()
[[ $VMTYPE == 'all' ]] && keep=('-k')

[[ ( -t 0 || -t 1 ) && $TERM != 'dumb' ]] && G_EXEC_OUTPUT=1
G_EXEC_DESC='Creating Proxmox 7-Zip archive' G_EXEC "$c7zz" a -mx=9 "${limit_threads[@]}" "$image_name.7z" "$image_name.qcow2" hash.txt README.md
[[ $VMTYPE == 'all' ]] || G_EXEC rm "$image_name.qcow2"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.7z" && G_EXEC rm "$image_name.7z"
G_EXEC_DESC='Creating Proxmox xz archive' G_EXEC xz -9e "${limit_threads[@]}" "${keep[@]}" "$image_name.qcow2"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.qcow2.xz" && G_EXEC rm "$image_name.qcow2.xz"
fi

####### UTM ##############################
@@ -1246,35 +1193,15 @@ _EOF_
<false/>
</dict>
</plist>
_EOF_
G_DIETPI-NOTIFY 2 'Generating hashes to pack with UTM appliance, please wait...'
cat << _EOF_ > hash.txt
FILE: $image_name.utm/Images/data.qcow2
DATE: $(date)
MD5: $(md5sum "$image_name.utm/Images/data.qcow2" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.utm/Images/data.qcow2" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.utm/Images/data.qcow2" | mawk '{print $1}')
FILE: $image_name.utm/config.plist
DATE: $(date)
MD5: $(md5sum "$image_name.utm/config.plist" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.utm/config.plist" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.utm/config.plist" | mawk '{print $1}')
FILE: $image_name.utm/view.plist
DATE: $(date)
MD5: $(md5sum "$image_name.utm/view.plist" | mawk '{print $1}')
SHA1: $(sha1sum "$image_name.utm/view.plist" | mawk '{print $1}')
SHA256: $(sha256sum "$image_name.utm/view.plist" | mawk '{print $1}')
_EOF_
[[ ( -t 0 || -t 1 ) && $TERM != 'dumb' ]] && G_EXEC_OUTPUT=1
G_EXEC_DESC='Creating UTM 7-Zip archive' G_EXEC "$c7zz" a -mx=9 "${limit_threads[@]}" "$image_name.7z" "$image_name.utm" hash.txt README.md
G_EXEC_DESC='Creating UTM tar.xz archive' XZ_OPT="-e9 ${limit_threads[*]}" G_EXEC tar -cJf "$image_name.tar.xz" "$image_name.utm"
G_EXEC rm -R "$image_name.utm"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.7z" && G_EXEC rm "$image_name.7z"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$image_name.tar.xz" && G_EXEC rm "$image_name.tar.xz"
fi

# Cleanup
G_EXEC rm hash.txt README.md "$OUTPUT_IMG_NAME.img"
G_EXEC rm "$OUTPUT_IMG_NAME.img"

exit 0
}
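
A condensed sketch of the two packaging patterns the dietpi-build diff above switches to, with illustrative file names: single-file artefacts (.ova, .vhdx, .qcow2) are compressed in place with xz, while multi-file appliances (the VMware .vmdk/.vmx pair and the UTM bundle) are wrapped into a tar.xz in one step.

# Single file: xz replaces the input with <name>.xz; -9e selects the ultra preset,
# and -k keeps the original (used for the Proxmox QCOW2 when a UTM appliance is
# generated from it afterwards).
xz -9e -T2 "$image_name.vhdx"
xz -9e -T2 -k "$image_name.qcow2"

# Multiple files: tar with -J pipes through xz; XZ_OPT forwards the same
# compression options to the xz child process.
XZ_OPT='-e9 -T2' tar -cJf "$image_name.tar.xz" "$image_name.vmdk" "$image_name.vmx"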
31 changes: 8 additions & 23 deletions .build/images/dietpi-imager
100755 → 100644
@@ -13,7 +13,7 @@
# or use an existing .img file
# or use Clonezilla to generate a bootable installer ISO from drive for x86_64 systems
# - Minimises root partition and filesystem
# - Hashes and 7z's the final image ready for release
# - Compresses the final image ready for release
#////////////////////////////////////

# Import DietPi-Globals ---------------------------------------------------------------
@@ -289,9 +289,7 @@
Main(){

# Dependencies
local p7zip='7zip' c7zz='7zz'
(( $G_DISTRO < 7 )) && p7zip='p7zip' c7zz='7zr'
G_AG_CHECK_INSTALL_PREREQ parted fdisk zerofree "$p7zip"
G_AG_CHECK_INSTALL_PREREQ parted fdisk zerofree xz-utils

# Skip menu if all inputs are provided via environment variables
if [[ ( $SOURCE_TYPE$FP_SOURCE == 'Drive'?* || $SOURCE_TYPE$FP_SOURCE_IMG == 'Image'?* ) && $FP_ROOT_DEV && $CLONING_TOOL =~ ^(dd|Clonezilla)$ && $OUTPUT_IMG_NAME ]]
@@ -723,33 +721,20 @@ _EOF_
# Exit now when archive shall be skipped
(( $SKIP_ARCHIVE )) && exit 0

# Generate hashes: MD5, SHA1, SHA256
G_DIETPI-NOTIFY 2 'Generating hashes to pack with image, please wait...'
cat << _EOF_ > hash.txt
FILE: $OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT
DATE: $(date)
MD5: $(md5sum "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT" | mawk '{print $1}')
SHA1: $(sha1sum "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT" | mawk '{print $1}')
SHA256: $(sha256sum "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT" | mawk '{print $1}')
_EOF_
# Download current README
G_EXEC_DESC='Downloading current README.md to pack with image...' G_EXEC curl -sSf "$DIETPI_REPO/README.md" -o README.md

# Generate 7z archive
# Generate xz archive
# NB: LZMA2 ultra compression requires much memory per thread. 1 GiB is not sufficient for >2 threads, hence use "-mmt2" to limit used CPU threads to "2" on 1 GiB devices with more than two cores.
local limit_threads=()
(( $(free -m | mawk '/Mem:/{print $2}') < 1750 && $(nproc) > 2 )) && limit_threads=('-mmt2')
[[ -f $OUTPUT_IMG_NAME.7z ]] && G_EXEC rm "$OUTPUT_IMG_NAME.7z"
(( $(free -m | mawk '/Mem:/{print $2}') < 1750 && $(nproc) > 2 )) && limit_threads=('-T2')
[[ -f $OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT.xz ]] && G_EXEC rm "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT.xz"
[[ ( -t 0 || -t 1 ) && $TERM != 'dumb' ]] && G_EXEC_OUTPUT=1
G_EXEC_DESC='Creating final 7zip archive' G_EXEC "$c7zz" a -mx=9 "${limit_threads[@]}" "$OUTPUT_IMG_NAME.7z" "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT" hash.txt README.md
G_EXEC_DESC='Creating final xz archive' G_EXEC xz -9e "${limit_threads[@]}" "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT"

G_EXEC_NOHALT=1 G_EXEC rm hash.txt README.md
G_DIETPI-NOTIFY 0 "DietPi-Imager has successfully finished.
Final image file: $PWD/$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT
Final 7z archive: $PWD/$OUTPUT_IMG_NAME.7z"
Final xz archive: $PWD/$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT.xz"

# Upload archive automatically if there is an upload.sh in the same directory
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$OUTPUT_IMG_NAME.7z" && G_EXEC rm -R "$OUTPUT_IMG_NAME.7z"
[[ -x 'upload.sh' ]] && G_EXEC_OUTPUT=1 G_EXEC ./upload.sh "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT.xz" && G_EXEC rm -R "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT.xz"

}

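The thread limit introduced above ("-mmt2" for 7z before, "-T2" for xz now) exists because the LZMA2 ultra preset allocates a large dictionary per compression thread, which can exhaust roughly 1 GiB of RAM when more than two threads run. A standalone sketch of the check as both scripts apply it, using the same threshold values as the diff:

# Cap xz at 2 threads on machines with less than ~1.75 GiB RAM and more than 2 cores.
limit_threads=()
(( $(free -m | mawk '/Mem:/{print $2}') < 1750 && $(nproc) > 2 )) && limit_threads=('-T2')

# The array is then expanded into the xz call, e.g.:
xz -9e "${limit_threads[@]}" "$OUTPUT_IMG_NAME.$OUTPUT_IMG_EXT"
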
3 changes: 2 additions & 1 deletion CHANGELOG.txt
@@ -5,7 +5,8 @@ New software:
- ADS-B Feeder | Track airplanes using SDRs and feed the data to ADS-B aggregators. Many thanks to @dirkhh for maintaining and implementing this software option: https://github.com/MichaIng/DietPi/pull/6587

Enhancements:
- General | DietPi images are now shipped with a trailing FAT partition which contains dietpi.txt and other config files for easier pre-configuration and automation from Windows and macOS hosts. The partition is removed automatically on first boot, after copying all supported config files/scripts. Related CLI flags have been added to our build scripts: "--add-fat-part" for dietpi-imager and "--no-fat-part" for dietpi-build. Many thanks to @dirkhh for implementing this feature: https://github.com/MichaIng/DietPi/pull/6602
- Images | DietPi images are now shipped with a trailing FAT partition which contains dietpi.txt and other config files for easier pre-configuration and automation from Windows and macOS hosts. The partition is removed automatically on first boot, after copying all supported config files/scripts. Related CLI flags have been added to our build scripts: "--add-fat-part" for dietpi-imager and "--no-fat-part" for dietpi-build. Many thanks to @dirkhh for implementing this feature: https://github.com/MichaIng/DietPi/pull/6602
- Images | All our images are now compressed via xz instead of 7z. These are a little easier to handle, especially on Linux hosts, and many flashing utilities can write xz-compressed images directly to disk, without the need to decompress them manually first. As xz compresses single files rather than directories, the dedicated README.md and hash text files are no longer included. Hash files for integrity checks within an archive serve no real purpose anyway, as the compression formats embed checksums internally (CRC64 in the case of xz), which are verified, and the integrity of the content thereby assured, as part of decompression.
- DietPi-Software | Docker: Enabled for Trixie and RISC-V via "docker.io" package from Debian repository.
- DietPi-Software | Portainer: Enabled for RISC-V as Docker is now supported on RISC-V as well.

