Updated to latest upstream

Gionatan Danti 2018-12-06 09:18:29 +01:00
commit 17be840590
18 changed files with 962 additions and 119 deletions

View File

@ -1,3 +1,42 @@
2.0.0
[overall] documentation updates, small fixes, more warnings (@sparky3387, @ljwobker, @phreaker0)
[syncoid] added force delete flag (@phreaker0)
[sanoid] removed sleeping between snapshot taking (@phreaker0)
[syncoid] added '--no-privilege-elevation' option to bypass root check (@lopsided98)
[sanoid] implemented weekly period (@phreaker0)
[syncoid] implemented support for zfs bookmarks as fallback (@phreaker0)
[sanoid] support for pre, post and prune snapshot scripts (@jouir, @darkbasic, @phreaker0)
[sanoid] ignore snapshot types that are set to 0 (@muff1nman)
[packaging] split snapshot taking/pruning into separate systemd units for debian package (@phreaker0)
[syncoid] replicate clones (@phreaker0)
[syncoid] added compression algorithms: lz4, xz (@spheenik, @phreaker0)
[sanoid] added option to defer pruning based on the available pool capacity (@phreaker0)
[sanoid] implemented frequent snapshots with configurable period (@phreaker0)
[syncoid] prevent a perl warning on systems which don't output estimated send size information (@phreaker0)
[packaging] dependency fixes (@rodgerd, mabushey)
[syncoid] implemented support for excluding children of a specific dataset (@phreaker0)
[sanoid] monitor-health command additionally checks vdev members for io and checksum errors (@phreaker0)
[syncoid] added ability to skip datasets by a custom dataset property 'syncoid:no-sync' (@attie)
[syncoid] don't die on some critical replication errors, but continue with the remaining datasets (@phreaker0)
[syncoid] return a non zero exit code if there was a problem replicating datasets (@phreaker0)
[syncoid] make local source bwlimit work (@phreaker0)
[syncoid] fix 'resume support' detection on FreeBSD (@pit3k)
[sanoid] updated INSTALL with missing dependency
[sanoid] fixed monitor-health command for pools containing cache and log devices (@phreaker0)
[sanoid] quiet flag suppresses all info output (@martinvw)
[sanoid] check for empty lockfile which led to sanoid failing on start (@jasonblewis)
[sanoid] added dst handling to prevent multiple invalid snapshots on time shift (@phreaker0)
[sanoid] cache improvements, makes sanoid much faster with a huge number of datasets/snapshots (@phreaker0)
[sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0)
[syncoid] added support for ZStandard compression (@danielewood)
[syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0)
[syncoid] correctly parse zfs column output, fixes resumable send with datasets containing spaces (@phreaker0)
[syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0)
[syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0)
[syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0)
[sanoid] use UTC by default in unit template and documentation (@phreaker0)
[syncoid] don't prune snapshots if instructed to not create them either (@phreaker0)
[syncoid] documented compatibility issues with (t)csh shells (@ecoutu)
1.4.18 implemented special character handling and support of ZFS resume/receive tokens by default in syncoid,
thank you @phreaker0!

View File

@ -30,4 +30,4 @@ strongly recommends using your distribution's repositories instead.
On Ubuntu: apt install libconfig-inifiles-perl
On CentOS: yum install perl-Config-IniFiles
On FreeBSD: pkg install p5-Config-IniFiles

View File

@ -28,6 +28,7 @@ And its /etc/sanoid/sanoid.conf might look something like this:
#############################
[template_production]
frequently = 0
hourly = 36
daily = 30
monthly = 3
@ -36,7 +37,7 @@ And its /etc/sanoid/sanoid.conf might look something like this:
autoprune = yes
```
Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 dailies, 3 monthlies, and no yearlies for all datasets under data/images (but not data/images itself, since process_children_only is set) - except in the case of data/images/win7, which follows the same template (since it's a child of data/images) but only keeps 4 hourlies for whatever reason.
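For reference, the module stanzas that paragraph describes would look something like this (a minimal sketch; the dataset paths match the example above):
```
[data/images]
	use_template = production
	recursive = yes
	process_children_only = yes

[data/images/win7]
	use_template = production
	hourly = 4
```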
##### Sanoid Command Line Options
@ -92,6 +93,13 @@ Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 da
This prints out quite a lot of additional information during a sanoid run, and is normally not needed.
+ --readonly
Skip creation/deletion of snapshots (Simulate).
+ --help
Show help message.
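For instance, a dry run that reports what sanoid would take and prune without touching the pool might look like this (a sketch using the flags documented above):
```
sanoid --cron --verbose --readonly
```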
----------
@ -172,7 +180,7 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
+ --compress <compression type>
Currently accepted options: gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.
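A usage sketch (hostname and dataset names are illustrative):
```
# replicate using zstd compression on both ends of the pipe
syncoid --compress=zstd-fast pool/data root@backup.example.com:backup/data
```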
+ --source-bwlimit <limit t|g|m|k>
@ -210,14 +218,34 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
This argument tells syncoid to not use resumable zfs send/receive streams.
+ --force-delete
Remove target datasets recursively if there are no matching snapshots/bookmarks (WARNING: the recursive removal also affects child datasets that do have matching snapshots/bookmarks).
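For example (dataset names illustrative; note that the existing target is destroyed before replication starts over):
```
# no common snapshot/bookmark? destroy backup/data and re-replicate from scratch
syncoid --force-delete pool/data root@backup.example.com:backup/data
```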
+ --no-clone-handling
This argument tells syncoid to not recreate clones on the target on the initial sync, and to do a normal replication instead.
+ --dumpsnaps
This prints a list of snapshots during the run.
+ --no-privilege-elevation
Bypass the root check and assume syncoid has the necessary permissions (for use with ZFS permission delegation).
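A sketch of the delegation this option pairs with (the user name and exact permission set are assumptions; adjust to your workflow):
```
# on the source host, delegate the ZFS permissions syncoid needs to an unprivileged user
zfs allow backupuser send,hold,snapshot,destroy,mount pool/data
# then replicate without running syncoid as root
syncoid --no-privilege-elevation pool/data backupuser@backup.example.com:backup/data
```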
+ --sshport
Allow sync to/from boxes running SSH on non-standard ports.
+ --sshcipher
Instruct ssh to use a particular cipher set.
+ --sshoption
Passes option to ssh. This argument can be specified multiple times.
+ --sshkey
Use specified identity file as per ssh -i.
@ -230,6 +258,10 @@ As of 1.4.18, syncoid also automatically supports and enables resume of interrup
This prints out quite a lot of additional information during a syncoid run, and is normally not needed.
+ --help
Show help message.
+ --version
Print the version and exit.

View File

@ -1 +1 @@
2.0.0

View File

@ -1,3 +1,46 @@
sanoid (2.0.0) unstable; urgency=medium
[overall] documentation updates, small fixes, more warnings (@sparky3387, @ljwobker, @phreaker0)
[syncoid] added force delete flag (@phreaker0)
[sanoid] removed sleeping between snapshot taking (@phreaker0)
[syncoid] added '--no-privilege-elevation' option to bypass root check (@lopsided98)
[sanoid] implemented weekly period (@phreaker0)
[syncoid] implemented support for zfs bookmarks as fallback (@phreaker0)
[sanoid] support for pre, post and prune snapshot scripts (@jouir, @darkbasic, @phreaker0)
[sanoid] ignore snapshot types that are set to 0 (@muff1nman)
[packaging] split snapshot taking/pruning into separate systemd units for debian package (@phreaker0)
[syncoid] replicate clones (@phreaker0)
[syncoid] added compression algorithms: lz4, xz (@spheenik, @phreaker0)
[sanoid] added option to defer pruning based on the available pool capacity (@phreaker0)
[sanoid] implemented frequent snapshots with configurable period (@phreaker0)
[syncoid] prevent a perl warning on systems which don't output estimated send size information (@phreaker0)
[packaging] dependency fixes (@rodgerd, mabushey)
[syncoid] implemented support for excluding children of a specific dataset (@phreaker0)
[sanoid] monitor-health command additionally checks vdev members for io and checksum errors (@phreaker0)
[syncoid] added ability to skip datasets by a custom dataset property 'syncoid:no-sync' (@attie)
[syncoid] don't die on some critical replication errors, but continue with the remaining datasets (@phreaker0)
[syncoid] return a non zero exit code if there was a problem replicating datasets (@phreaker0)
[syncoid] make local source bwlimit work (@phreaker0)
[syncoid] fix 'resume support' detection on FreeBSD (@pit3k)
[sanoid] updated INSTALL with missing dependency
[sanoid] fixed monitor-health command for pools containing cache and log devices (@phreaker0)
[sanoid] quiet flag suppresses all info output (@martinvw)
[sanoid] check for empty lockfile which led to sanoid failing on start (@jasonblewis)
[sanoid] added dst handling to prevent multiple invalid snapshots on time shift (@phreaker0)
[sanoid] cache improvements, makes sanoid much faster with a huge number of datasets/snapshots (@phreaker0)
[sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0)
[syncoid] added support for ZStandard compression (@danielewood)
[syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0)
[syncoid] correctly parse zfs column output, fixes resumable send with datasets containing spaces (@phreaker0)
[syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0)
[syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0)
[syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0)
[sanoid] use UTC by default in unit template and documentation (@phreaker0)
[syncoid] don't prune snapshots if instructed to not create them either (@phreaker0)
[syncoid] documented compatibility issues with (t)csh shells (@ecoutu)
-- Jim Salter <github@jrs-s.net> Wed, 04 Dec 2018 18:10:00 -0400
sanoid (1.4.18) unstable; urgency=medium
implemented special character handling and support of ZFS resume/receive tokens by default in syncoid,

View File

@ -16,4 +16,14 @@ override_dh_auto_install:
@mkdir -p $(DESTDIR)/usr/share/doc/sanoid; \
cp sanoid.conf $(DESTDIR)/usr/share/doc/sanoid/sanoid.conf.example;
@mkdir -p $(DESTDIR)/lib/systemd/system; \
cp debian/sanoid.timer $(DESTDIR)/lib/systemd/system; \
cp debian/sanoid-prune.service $(DESTDIR)/lib/systemd/system;
override_dh_installinit:
dh_installinit --noscripts
override_dh_systemd_enable:
dh_systemd_enable sanoid.timer
dh_systemd_enable sanoid-prune.service
override_dh_systemd_start:
dh_systemd_start sanoid.timer

View File

@ -0,0 +1,13 @@
[Unit]
Description=Cleanup ZFS Pool
Requires=zfs.target
After=zfs.target sanoid.service
ConditionFileNotEmpty=/etc/sanoid/sanoid.conf
[Service]
Environment=TZ=UTC
Type=oneshot
ExecStart=/usr/sbin/sanoid --prune-snapshots
[Install]
WantedBy=sanoid.service
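Enabling the split units might then look like this (a sketch matching the spec file's "enable sanoid.timer" hint; sanoid-prune.service hooks in via its WantedBy=sanoid.service):
```
systemctl enable --now sanoid.timer
systemctl enable sanoid-prune.service
```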

View File

@ -7,4 +7,4 @@ ConditionFileNotEmpty=/etc/sanoid/sanoid.conf
[Service]
Environment=TZ=UTC
Type=oneshot
ExecStart=/usr/sbin/sanoid --take-snapshots

View File

@ -1,4 +1,4 @@
%global version 2.0.0
%global git_tag v%{version}
# Enable with systemctl "enable sanoid.timer"
@ -12,9 +12,9 @@ Summary: A policy-driven snapshot management tool for ZFS file systems
Group: Applications/System
License: GPLv3
URL: https://github.com/jimsalterjrs/sanoid
Source0: https://github.com/jimsalterjrs/%{name}/archive/%{git_tag}/%{name}-%{version}.tar.gz
Requires: perl, mbuffer, lzop, pv, perl-Config-IniFiles
%if 0%{?_with_systemd}
Requires: systemd >= 212
@ -111,6 +111,8 @@ echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}
%endif
%changelog
* Wed Dec 04 2018 Christoph Klaffl <christoph@phreaker.eu> - 2.0.0
- Bump to 2.0.0
* Sat Apr 28 2018 Dominic Robinson <github@dcrdev.com> - 1.4.18-1
- Bump to 1.4.18
* Thu Aug 31 2017 Dominic Robinson <github@dcrdev.com> - 1.4.14-2

sanoid
View File

@ -4,7 +4,8 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.0.0';
my $MINIMUM_DEFAULTS_VERSION = 2;
use strict;
use warnings;
@ -31,6 +32,7 @@ if (keys %args < 2) {
my $pscmd = '/bin/ps';
my $zfs = '/sbin/zfs';
my $zpool = '/sbin/zpool';
my $conf_file = "$args{'configdir'}/sanoid.conf";
my $default_conf_file = "$args{'configdir'}/sanoid.defaults.conf";
@ -44,6 +46,7 @@ my $cache = '/var/cache/sanoidsnapshots.txt';
my $cacheTTL = 900; # 15 minutes
my %snaps = getsnaps( \%config, $cacheTTL, $forcecacheupdate );
my %pruned;
my %capacitycache;
my %snapsbytype = getsnapsbytype( \%config, \%snaps );
@ -125,15 +128,18 @@ sub monitor_snapshots {
my $path = $config{$section}{'path'};
push @paths, $path;
my @types = ('yearly','monthly','weekly','daily','hourly','frequently');
foreach my $type (@types) {
if ($config{$section}{$type} == 0) { next; }
my $smallerperiod = 0;
# we need to set the period length in seconds first
if ($type eq 'frequently') { $smallerperiod = 1; }
elsif ($type eq 'hourly') { $smallerperiod = 60; }
elsif ($type eq 'daily') { $smallerperiod = 60*60; }
elsif ($type eq 'weekly') { $smallerperiod = 60*60*24; }
elsif ($type eq 'monthly') { $smallerperiod = 60*60*24*7; }
elsif ($type eq 'yearly') { $smallerperiod = 60*60*24*31; }
my $typewarn = $type . '_warn';
my $typecrit = $type . '_crit';
@ -254,13 +260,19 @@ sub prune_snapshots {
my $path = $config{$section}{'path'};
my $period = 0;
if (check_prune_defer($config, $section)) {
if ($args{'verbose'}) { print "INFO: deferring snapshot pruning ($section)...\n"; }
next;
}
foreach my $type (keys %{ $config{$section} }){
unless ($type =~ /ly$/) { next; }
# we need to set the period length in seconds first
if ($type eq 'frequently') { $period = 60 * $config{$section}{'frequent_period'}; }
elsif ($type eq 'hourly') { $period = 60*60; }
elsif ($type eq 'daily') { $period = 60*60*24; }
elsif ($type eq 'weekly') { $period = 60*60*24*7; }
elsif ($type eq 'monthly') { $period = 60*60*24*31; }
elsif ($type eq 'yearly') { $period = 60*60*24*365.25; }
@ -299,6 +311,17 @@ sub prune_snapshots {
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
my $dataset = (split '@', $snap)[0];
my $snapname = (split '@', $snap)[1];
if ($config{$dataset}{'pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
if ($args{'verbose'}) { print "executing pruning_script '".$config{$dataset}{'pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pruning_script',$dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
}
} else {
warn "could not remove $snap : $?";
}
@ -373,7 +396,18 @@ sub take_snapshots {
# to avoid duplicates with DST
my $dateSuffix = "";
if ($type eq 'frequently') {
my $frequentslice = int($datestamp{'min'} / $config{$section}{'frequent_period'});
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$frequentslice * $config{$section}{'frequent_period'};
push @preferredtime,$datestamp{'hour'};
push @preferredtime,$datestamp{'mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60 * $config{$section}{'frequent_period'}; } # preferred time is later this frequent period - so look at last frequent period
} elsif ($type eq 'hourly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'hourly_min'};
push @preferredtime,$datestamp{'hour'};
@ -420,6 +454,24 @@ sub take_snapshots {
$lastpreferred -= 2*$dstOffset;
}
} # preferred time is later today - so look at yesterday's
} elsif ($type eq 'weekly') {
# calculate offset in seconds for the desired weekday
my $offset = 0;
if ($config{$section}{'weekly_wday'} < $datestamp{'wday'}) {
$offset += 7;
}
$offset += $config{$section}{'weekly_wday'} - $datestamp{'wday'};
$offset *= 60*60*24; # full day
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'weekly_min'};
push @preferredtime,$config{$section}{'weekly_hour'};
push @preferredtime,$datestamp{'mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
$lastpreferred += $offset;
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*7; } # preferred time is later this week - so look at last week's
} elsif ($type eq 'monthly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'monthly_min'};
@ -438,6 +490,9 @@ sub take_snapshots {
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*31*365.25; } # preferred time is later this year - so look at last year
} else {
warn "WARN: unknown interval type $type in config!";
next;
}
# reconstruct our human-formatted most recent preferred snapshot time into an epoch time, to compare with the epoch of our most recent snapshot
@ -455,12 +510,39 @@ sub take_snapshots {
if ( (scalar(@newsnaps)) > 0) {
foreach my $snap ( @newsnaps ) {
my $dataset = (split '@', $snap)[0];
my $snapname = (split '@', $snap)[1];
my $presnapshotfailure = 0;
if ($config{$dataset}{'pre_snapshot_script'} and !$args{'readonly'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
if ($args{'verbose'}) { print "executing pre_snapshot_script '".$config{$dataset}{'pre_snapshot_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pre_snapshot_script',$dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
if ($ret != 0) {
# warning was already thrown by runscript function
$config{$dataset}{'no_inconsistent_snapshot'} and next;
$presnapshotfailure = 1;
}
}
if ($args{'verbose'}) { print "taking snapshot $snap\n"; } if ($args{'verbose'}) { print "taking snapshot $snap\n"; }
if (!$args{'readonly'}) { if (!$args{'readonly'}) {
system($zfs, "snapshot", "$snap") == 0 system($zfs, "snapshot", "$snap") == 0
or warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?"; or warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?";
# make sure we don't end up with multiple snapshots with the same ctime }
sleep 1; if ($config{$dataset}{'post_snapshot_script'} and !$args{'readonly'}) {
if (!$presnapshotfailure or $config{$dataset}{'force_post_snapshot_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
if ($args{'verbose'}) { print "executing post_snapshot_script '".$config{$dataset}{'post_snapshot_script'}."' on dataset '$dataset'\n"; }
runscript('post_snapshot_script',$dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
}
}
}
$forcecacheupdate = 1;
@ -492,16 +574,20 @@ sub blabber {
my $path = $config{$section}{'path'};
print "Filesystem $path has:\n";
print " $snapsbypath{$path}{'numsnaps'} total snapshots ";
if ($snapsbypath{$path}{'numsnaps'} == 0) {
print "(no current snapshots)"
} else {
print "(newest: ";
my $newest = sprintf("%.1f",$snapsbypath{$path}{'newest'} / 60 / 60);
print "$newest hours old)\n";
foreach my $type (keys %{ $snapsbytype{$path} }){
print " $snapsbytype{$path}{$type}{'numsnaps'} $type\n";
print " desired: $config{$section}{$type}\n";
print " newest: ";
my $newest = sprintf("%.1f",($snapsbytype{$path}{$type}{'newest'} / 60 / 60));
print "$newest hours old, named $snapsbytype{$path}{$type}{'newestname'}\n";
}
}
print "\n\n"; print "\n\n";
} }
@ -661,10 +747,21 @@ sub init {
tie my %ini, 'Config::IniFiles', ( -file => $conf_file ) or die "FATAL: cannot load $conf_file - please create a valid local config file before running sanoid!";
# we'll use these later to normalize potentially true and false values on any toggle keys
my @toggles = ('autosnap','autoprune','monitor_dont_warn','monitor_dont_crit','monitor','recursive','process_children_only','skip_children','no_inconsistent_snapshot','force_post_snapshot_script');
my @istrue=(1,"true","True","TRUE","yes","Yes","YES","on","On","ON");
my @isfalse=(0,"false","False","FALSE","no","No","NO","off","Off","OFF");
# check if default configuration file is up to date
my $defaults_version = 1;
if (defined $defaults{'version'}{'version'}) {
$defaults_version = $defaults{'version'}{'version'};
delete $defaults{'version'};
}
if ($defaults_version < $MINIMUM_DEFAULTS_VERSION) {
die "FATAL: you're using sanoid.defaults.conf v$defaults_version, this version of sanoid requires a minimum sanoid.defaults.conf v$MINIMUM_DEFAULTS_VERSION";
}
foreach my $section (keys %ini) {
# first up - die with honor if unknown parameters are set in any modules or templates by the user.
@ -691,10 +788,12 @@ sub init {
# override with values from user-defined default template, if any
foreach my $key (keys %{$ini{'template_default'}}) {
if ($key =~ /template|recursive/) {
warn "ignored key '$key' from user-defined default template.\n";
next;
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined default template.\n"; }
$config{$section}{$key} = $ini{'template_default'}{$key};
}
}
@ -708,17 +807,19 @@ sub init {
my $template = 'template_'.$rawtemplate;
foreach my $key (keys %{$ini{$template}}) {
if ($key =~ /template|recursive/) {
warn "ignored key '$key' from '$rawtemplate' template.\n";
next;
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
$config{$section}{$key} = $ini{$template}{$key};
}
}
}
# override with any locally set values in the module itself
foreach my $key (keys %{$ini{$section}} ) {
if (! ($key =~ /template|recursive|skip_children/)) {
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value directly set in module.\n"; }
$config{$section}{$key} = $ini{$section}{$key};
}
@ -742,11 +843,20 @@ sub init {
}
# how 'bout some recursion? =)
my $recursive = $ini{$section}{'recursive'} && grep( /^$ini{$section}{'recursive'}$/, @istrue );
my $skipChildren = $ini{$section}{'skip_children'} && grep( /^$ini{$section}{'skip_children'}$/, @istrue );
my @datasets;
if ($recursive || $skipChildren) {
@datasets = getchilddatasets($config{$section}{'path'});
DATASETS: foreach my $dataset(@datasets) {
chomp $dataset;
if ($skipChildren) {
if ($args{'debug'}) { print "DEBUG: ignoring $dataset.\n"; }
delete $config{$dataset};
next DATASETS;
}
foreach my $key (keys %{$config{$section}} ) {
if (! ($key =~ /template|recursive|children_only/)) {
if ($args{'debug'}) { print "DEBUG: recursively setting $key from $section to $dataset.\n"; }
@ -872,7 +982,7 @@ sub check_zpool() {
exit $ERRORS{$state};
}
my $statcommand="$zpool list -o name,size,cap,health,free $pool";
if (! open STAT, "$statcommand|") {
print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
@ -920,7 +1030,7 @@ sub check_zpool() {
## flag to detect section of zpool status involving our zpool
my $poolfind=0;
$statcommand="$zpool status $pool";
if (! open STAT, "$statcommand|") {
$state = 'CRITICAL';
print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
@ -974,7 +1084,7 @@ sub check_zpool() {
}
## other cases
my ($dev, $sta, $read, $write, $cksum) = /^\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/;
if (!defined($sta)) {
# cache and logs are special and don't have a status
@ -994,8 +1104,21 @@ sub check_zpool() {
## no display for verbose level 1
next if ($verbose==1);
## don't display working devices for verbose level 2
if ($verbose==2 && ($state eq "OK" || $sta eq "ONLINE" || $sta eq "AVAIL" || $sta eq "INUSE")) {
# check for io/checksum errors
my @vdeverr = ();
if ($read != 0) { push @vdeverr, "read" };
if ($write != 0) { push @vdeverr, "write" };
if ($cksum != 0) { push @vdeverr, "cksum" };
if (scalar @vdeverr) {
$dmge=$dmge . "(" . $dev . ":" . join(", ", @vdeverr) . " errors) ";
if ($state eq "OK") { $state = "WARNING" };
}
next;
}
## show everything else
if (/^\s{3}(\S+)/) {
@ -1015,7 +1138,7 @@ sub check_zpool() {
return ($ERRORS{$state},$msg);
} # end check_zpool()
sub check_capacity_limit {
my $value = shift;
if (!defined($value) || $value !~ /^\d+\z/) {
@ -1038,7 +1161,7 @@ sub check_zpool_capacity() {
my $capacitylimitsref=shift;
my %capacitylimits=%$capacitylimitsref;
my $statcommand="$zpool list -H -o cap $pool";
if (! open STAT, "$statcommand|") {
print ("$state '$statcommand' command returns no result!\n");
@ -1083,6 +1206,60 @@ sub check_zpool_capacity() {
return ($ERRORS{$state},$msg);
} # end check_zpool_capacity()
sub check_prune_defer {
my ($config, $section) = @_;
my $limit = $config{$section}{"prune_defer"};
if (!check_capacity_limit($limit)) {
die "ERROR: invalid prune_defer limit!\n";
}
if ($limit eq 0) {
return 0;
}
my @parts = split /\//, $section, 2;
my $pool = $parts[0];
if (!exists $capacitycache{$pool}) {
$capacitycache{$pool} = get_zpool_capacity($pool);
}
if ($limit < $capacitycache{$pool}) {
return 0;
}
return 1;
}
sub get_zpool_capacity {
my $pool = shift;
my $statcommand="$zpool list -H -o cap $pool";
if (! open STAT, "$statcommand|") {
die "ERROR: '$statcommand' command returns no result!\n";
}
my $line = <STAT>;
close(STAT);
chomp $line;
my @row = split(/ +/, $line);
my $cap=$row[0];
## check for valid capacity value
if ($cap !~ m/^[0-9]{1,3}%$/ ) {
die "ERROR: '$statcommand' command returned invalid capacity value ($cap)!\n";
}
$cap =~ s/\D//g;
return $cap;
}
######################################################################################################
######################################################################################################
######################################################################################################
@ -1244,6 +1421,9 @@ sub getchilddatasets {
my @children = <FH>;
close FH;
# parent dataset is the first element
shift @children;
return @children;
}
@ -1296,6 +1476,41 @@ sub removecachedsnapshots {
undef %pruned;
}
#######################################################################################################################
#######################################################################################################################
#######################################################################################################################
sub runscript {
my $key=shift;
my $dataset=shift;
my $timeout=$config{$dataset}{'script_timeout'};
my $ret;
eval {
if ($timeout > 0) {
local $SIG{ALRM} = sub { die "alarm\n" };
alarm $timeout;
}
$ret = system($config{$dataset}{$key});
alarm 0;
};
if ($@) {
if ($@ eq "alarm\n") {
warn "WARN: $key didn't finish in the allowed time!";
} else {
warn "CRITICAL ERROR: $@";
}
return -1;
} else {
if ($ret != 0) {
warn "WARN: $key failed, $?";
}
}
return $ret;
}
__END__
=head1 NAME

View File

@ -40,6 +40,7 @@
daily = 60
[template_production]
frequently = 0
hourly = 36
daily = 30
monthly = 3
@ -49,6 +50,7 @@
[template_backup]
autoprune = yes
frequently = 0
hourly = 30
daily = 90
monthly = 12
@ -67,6 +69,21 @@
daily_warn = 48
daily_crit = 60
[template_scripts]
### dataset and snapshot name will be supplied as environment variables
### for all pre/post/prune scripts ($SANOID_TARGET, $SANOID_SNAPNAME)
### run script before snapshot
pre_snapshot_script = /path/to/script.sh
### run script after snapshot
post_snapshot_script = /path/to/script.sh
### run script after pruning snapshot
pruning_script = /path/to/script.sh
### don't take an inconsistent snapshot (skip if pre script fails)
#no_inconsistent_snapshot = yes
### run post_snapshot_script when pre_snapshot_script is failing
#force_post_snapshot_script = yes
### limit allowed execution time of scripts before continuing (<= 0: infinite)
script_timeout = 5
[template_ignore]
autoprune = no
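As an illustration of the [template_scripts] section above, a hook script along these lines could consume the exported variables (script path and log file are hypothetical):
```
#!/bin/sh
# hypothetical pre/post/prune hook: record which snapshot sanoid is handling.
# SANOID_TARGET and SANOID_SNAPNAME are exported by sanoid before the script runs.
echo "$(date -u '+%F %T') ${SANOID_TARGET}@${SANOID_SNAPNAME}" >> /var/log/sanoid-hooks.log
```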

View File

@ -5,6 +5,8 @@
# #
# you have been warned. #
###################################################################################
[version]
version = 2
[template_default]
@ -15,6 +17,26 @@ path =
recursive =
use_template =
process_children_only =
skip_children =
pre_snapshot_script =
post_snapshot_script =
pruning_script =
script_timeout = 5
no_inconsistent_snapshot =
force_post_snapshot_script =
# for snapshots shorter than one hour, the period duration must be defined
# in minutes. Because they are scheduled within a full hour, the selected
# value should divide 60 minutes without remainder, so that snapshots are
# spaced at equal intervals. Values larger than 59 aren't practical, as only
# one snapshot would be taken on each full hour in that case.
# examples:
# frequent_period = 15 -> four snapshots each hour, 15 minutes apart
# frequent_period = 5 -> twelve snapshots each hour, 5 minutes apart
# frequent_period = 45 -> two snapshots each hour with different time gaps
# between them: 45 minutes and 15 minutes in this case
frequent_period = 15
# If any snapshot type is set to 0, we will not take snapshots for it - and will immediately
# prune any of those type snapshots already present.
@ -22,11 +44,15 @@ process_children_only =
# Otherwise, if autoprune is set, we will prune any snapshots of that type which are older
# than (setting * periodicity) - so if daily = 90, we'll prune any dailies older than 90 days.
autoprune = yes
frequently = 0
hourly = 48
daily = 90
weekly = 0
monthly = 6
yearly = 0
# pruning can be skipped based on the used capacity of the pool
# (0: always prune, 1-100: only prune if used capacity is greater than this value)
prune_defer = 0
# We will automatically take snapshots if autosnap is on, at the desired times configured
# below (or immediately, if we don't have one since the last preferred time for that type).
@ -40,6 +66,10 @@ hourly_min = 0
# daily - at 23:59 (most people expect a daily to contain everything done DURING that day)
daily_hour = 23
daily_min = 59
# weekly - at 23:30 each Monday
weekly_wday = 1
weekly_hour = 23
weekly_min = 30
# monthly - immediately at the beginning of the month (ie 00:00 of day 1)
monthly_mday = 1
monthly_hour = 0
@ -62,12 +92,16 @@ yearly_min = 0
monitor = yes
monitor_dont_warn = no
monitor_dont_crit = no
frequently_warn = 0
frequently_crit = 0
hourly_warn = 90
hourly_crit = 360
daily_warn = 28
daily_crit = 32
weekly_warn = 0
weekly_crit = 0
monthly_warn = 5
monthly_crit = 6
yearly_warn = 0
yearly_crit = 0

syncoid
View File

@ -4,7 +4,7 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.0.0';
use strict;
use warnings;
@ -19,7 +19,8 @@ use Sys::Hostname;
my %args = ('sshkey' => '', 'sshport' => '', 'sshcipher' => '', 'sshoption' => [], 'target-bwlimit' => '', 'source-bwlimit' => '');
GetOptions(\%args, "no-command-checks", "monitor-version", "compress=s", "dumpsnaps", "recursive|r",
"source-bwlimit=s", "target-bwlimit=s", "sshkey=s", "sshport=i", "sshcipher|c=s", "sshoption|o=s@",
"debug", "quiet", "no-stream", "no-sync-snap", "no-resume", "exclude=s@", "skip-parent", "identifier=s",
"no-clone-handling", "no-privilege-elevation", "force-delete", "no-clone-rollback", "no-rollback") or pod2usage(2);
my %compressargs = %{compressargset($args{'compress'} || 'default')}; # Can't be done with GetOptions arg, as default still needs to be set
@ -104,17 +105,59 @@ my $exitcode = 0;
## replication ##
if (!defined $args{'recursive'}) {
syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef);
} else {
if ($debug) { print "DEBUG: recursive sync of $sourcefs.\n"; }
my @datasets = getchilddatasets($sourcehost, $sourcefs, $sourceisroot);
my @deferred;
foreach my $datasetProperties(@datasets) {
my $dataset = $datasetProperties->{'name'};
my $origin = $datasetProperties->{'origin'};
if ($origin eq "-" || defined $args{'no-clone-handling'}) {
$origin = undef;
} else {
# check if clone source is replicated too
my @values = split(/@/, $origin, 2);
my $srcdataset = $values[0];
my $found = 0;
foreach my $datasetProperties(@datasets) {
if ($datasetProperties->{'name'} eq $srcdataset) {
$found = 1;
last;
}
}
if ($found == 0) {
# clone source is not replicated, do a full replication
$origin = undef;
} else {
# clone source is replicated, defer until all non clones are replicated
push @deferred, $datasetProperties;
next;
}
}
$dataset =~ s/\Q$sourcefs\E//;
chomp $dataset;
my $childsourcefs = $sourcefs . $dataset;
my $childtargetfs = $targetfs . $dataset;
# print "syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs); \n";
syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin);
}
# replicate cloned datasets and if this is the initial run, recreate them on the target
foreach my $datasetProperties(@deferred) {
my $dataset = $datasetProperties->{'name'};
my $origin = $datasetProperties->{'origin'};
$dataset =~ s/\Q$sourcefs\E//;
chomp $dataset;
my $childsourcefs = $sourcefs . $dataset;
my $childtargetfs = $targetfs . $dataset;
syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin);
}
}
@ -147,37 +190,51 @@ sub getchilddatasets {
$fsescaped = escapeshellparam($fsescaped);
}
my $getchildrencmd = "$rhost $mysudocmd $zfscmd list -o name,origin -t filesystem,volume -Hr $fsescaped |";
if ($debug) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; } if ($debug) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; }
open FH, $getchildrencmd; if (! open FH, $getchildrencmd) {
my @children = <FH>; die "ERROR: list command failed!\n";
close FH;
if (defined $args{'skip-parent'}) {
# parent dataset is the first element
shift @children;
} }
my @children;
my $first = 1;
DATASETS: while(<FH>) {
chomp;
if (defined $args{'skip-parent'} && $first eq 1) {
# parent dataset is the first element
$first = 0;
next;
}
my ($dataset, $origin) = /^([^\t]+)\t([^\t]+)/;
if (defined $args{'exclude'}) {
my $excludes = $args{'exclude'};
foreach (@$excludes) {
print("$dataset\n");
if ($dataset =~ /$_/) {
if ($debug) { print "DEBUG: excluded $dataset because of $_\n"; }
next DATASETS;
} }
}
}
my %properties;
$properties{'name'} = $dataset;
$properties{'origin'} = $origin;
push @children, \%properties;
} }
close FH;
return @children;
}
sub syncdataset {
my ($sourcehost, $sourcefs, $targethost, $targetfs, $origin, $skipsnapshot) = @_;
my $sourcefsescaped = escapeshellparam($sourcefs);
my $targetfsescaped = escapeshellparam($targetfs);
@ -253,7 +310,7 @@ sub syncdataset {
print "\n\n\n"; print "\n\n\n";
} }
if (!defined $args{'no-sync-snap'}) { if (!defined $args{'no-sync-snap'} && !defined $skipsnapshot) {
# create a new syncoid snapshot on the source filesystem.
$newsyncsnap = newsyncsnap($sourcehost,$sourcefs,$sourceisroot);
if (!$newsyncsnap) {
@ -311,11 +368,25 @@ sub syncdataset {
my $sendcmd = "$sourcesudocmd $zfscmd send $sourcefsescaped\@$oldestsnapescaped"; my $sendcmd = "$sourcesudocmd $zfscmd send $sourcefsescaped\@$oldestsnapescaped";
my $recvcmd = "$targetsudocmd $zfscmd receive $receiveextraargs $forcedrecv $targetfsescaped"; my $recvcmd = "$targetsudocmd $zfscmd receive $receiveextraargs $forcedrecv $targetfsescaped";
my $pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap",0,$sourceisroot); my $pvsize;
if (defined $origin) {
my $originescaped = escapeshellparam($origin);
$sendcmd = "$sourcesudocmd $zfscmd send -i $originescaped $sourcefsescaped\@$oldestsnapescaped";
my $streamargBackup = $args{'streamarg'};
$args{'streamarg'} = "-i";
$pvsize = getsendsize($sourcehost,$origin,"$sourcefs\@$oldestsnap",$sourceisroot);
$args{'streamarg'} = $streamargBackup;
} else {
$pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap",0,$sourceisroot);
}
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = 'UNKNOWN'; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) {
if (defined $origin) {
print "INFO: Clone is recreated on target $targetfs based on $origin\n";
}
if (!defined ($args{'no-stream'}) ) {
print "INFO: Sending oldest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n";
} else {
@ -402,7 +473,7 @@ sub syncdataset {
# a resumed transfer will only be done to the next snapshot,
# so do a normal sync cycle
return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef);
}
# find most recent matching snapshot and do an -I
@ -416,11 +487,73 @@ sub syncdataset {
my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used');
my $bookmark = 0;
my $bookmarkcreation = 0;
my $matchingsnap = getmatchingsnapshot($sourcefs, $targetfs, \%snaps);
if (! $matchingsnap) {
# no matching snapshots, check for bookmarks as fallback
my %bookmarks = getbookmarks($sourcehost,$sourcefs,$sourceisroot);
# check for matching guid of source bookmark and target snapshot (newest first)
foreach my $snap ( sort { $snaps{'target'}{$b}{'creation'}<=>$snaps{'target'}{$a}{'creation'} } keys %{ $snaps{'target'} }) {
my $guid = $snaps{'target'}{$snap}{'guid'};
if (defined $bookmarks{$guid}) {
# found a match
$bookmark = $bookmarks{$guid}{'name'};
$bookmarkcreation = $bookmarks{$guid}{'creation'};
$matchingsnap = $snap;
last;
}
}
if (! $bookmark) {
if ($args{'force-delete'}) {
if (!$quiet) { print "Removing $targetfs because no matching snapshots were found\n"; }
my $rcommand = '';
my $mysudocmd = '';
my $targetfsescaped = escapeshellparam($targetfs);
if ($targethost ne '') { $rcommand = "$sshcmd $targethost"; }
if (!$targetisroot) { $mysudocmd = $sudocmd; }
my $prunecmd = "$mysudocmd $zfscmd destroy -r $targetfsescaped; ";
if ($targethost ne '') {
$prunecmd = escapeshellparam($prunecmd);
}
my $ret = system("$rcommand $prunecmd");
if ($ret != 0) {
warn "WARNING: $rcommand $prunecmd failed: $?";
} else {
# redo sync and skip snapshot creation (already taken)
return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef, 1);
}
}
# if we got this far, we failed to find a matching snapshot/bookmark.
if ($exitcode < 2) { $exitcode = 2; }
print "\n";
print "CRITICAL ERROR: Target $targetfs exists but has no snapshots matching with $sourcefs!\n";
print " Replication to target would require destroying existing\n";
print " target. Cowardly refusing to destroy your existing target.\n\n";
# experience tells me we need a mollyguard for people who try to
# zfs create targetpool/targetsnap ; syncoid sourcepool/sourcesnap targetpool/targetsnap ...
if ( $targetsize < (64*1024*1024) ) {
print " NOTE: Target $targetfs dataset is < 64MB used - did you mistakenly run\n";
print " \`zfs create $args{'target'}\` on the target? ZFS initial\n";
print " replication must be to a NON EXISTENT DATASET, which will\n";
print " then be CREATED BY the initial replication process.\n\n";
}
# return false now in case more child datasets need replication.
return 0;
}
}
# make sure target is (still) not currently in receive.
@ -450,20 +583,87 @@ sub syncdataset {
system ("$targetsudocmd $zfscmd rollback $rollbacktype $targetfsescaped\@$matchingsnapescaped"); system ("$targetsudocmd $zfscmd rollback $rollbacktype $targetfsescaped\@$matchingsnapescaped");
} }
} }
my $nextsnapshot = 0;
if ($bookmark) {
my $bookmarkescaped = escapeshellparam($bookmark);
if (!defined $args{'no-stream'}) {
# if intermediate snapshots are needed we need to find the next oldest snapshot,
# do a replication to it and replicate as always from oldest to newest
# because bookmark sends don't support intermediates directly
foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
if ($snaps{'source'}{$snap}{'creation'} >= $bookmarkcreation) {
$nextsnapshot = $snap;
last;
}
}
}
# bookmark stream size can't be determined
my $pvsize = 0;
my $disp_pvsize = "UNKNOWN";
if ($nextsnapshot) {
my $nextsnapshotescaped = escapeshellparam($nextsnapshot);
my $sendcmd = "$sourcesudocmd $zfscmd send -i $sourcefsescaped#$bookmarkescaped $sourcefsescaped\@$nextsnapshotescaped";
my $recvcmd = "$targetsudocmd $zfscmd receive $receiveextraargs -F $targetfsescaped";
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) { print "Sending incremental $sourcefs#$bookmarkescaped ... $nextsnapshot (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
system("$synccmd") == 0 or do {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
};
$matchingsnap = $nextsnapshot;
$matchingsnapescaped = escapeshellparam($matchingsnap);
} else {
my $sendcmd = "$sourcesudocmd $zfscmd send -i $sourcefsescaped#$bookmarkescaped $sourcefsescaped\@$newsyncsnapescaped";
my $recvcmd = "$targetsudocmd $zfscmd receive $receiveextraargs -F $targetfsescaped";
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) { print "Sending incremental $sourcefs#$bookmarkescaped ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
system("$synccmd") == 0 or do {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
};
}
}
# do a normal replication if bookmarks aren't used or if previous
# bookmark replication was only done to the next oldest snapshot
if (!$bookmark || $nextsnapshot) {
my $sendcmd = "$sourcesudocmd $zfscmd send $args{'streamarg'} $sourcefsescaped\@$matchingsnapescaped $sourcefsescaped\@$newsyncsnapescaped";
my $recvcmd = "$targetsudocmd $zfscmd receive $receiveextraargs -F $targetfsescaped";
my $pvsize = getsendsize($sourcehost,"$sourcefs\@$matchingsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) { print "Sending incremental $sourcefs\@$matchingsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
system("$synccmd") == 0 or do {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
};
}
# restore original readonly value to target after sync complete
# dyking this functionality out for the time being due to buggy mount/unmount behavior
@@ -520,17 +720,29 @@ sub compressargset {
        decomrawcmd => '/usr/bin/zstd',
        decomargs => '-dc',
    },
    'xz' => {
        rawcmd => '/usr/bin/xz',
        args => '',
        decomrawcmd => '/usr/bin/xz',
        decomargs => '-d',
    },
    'lzo' => {
        rawcmd => '/usr/bin/lzop',
        args => '',
        decomrawcmd => '/usr/bin/lzop',
        decomargs => '-dfc',
    },
    'lz4' => {
        rawcmd => '/usr/bin/lz4',
        args => '',
        decomrawcmd => '/usr/bin/lz4',
        decomargs => '-dc',
    },
);
if ($value eq 'default') {
    $value = $DEFAULT_COMPRESSION;
} elsif (!(grep $value eq $_, ('gzip', 'pigz-fast', 'pigz-slow', 'zstd-fast', 'zstd-slow', 'lz4', 'xz', 'lzo', 'default', 'none'))) {
    warn "Unrecognised compression value $value, defaulting to $DEFAULT_COMPRESSION";
    $value = $DEFAULT_COMPRESSION;
}
@@ -956,32 +1168,15 @@ sub pruneoldsyncsnaps {
}
sub getmatchingsnapshot {
    my ($sourcefs, $targetfs, $snaps) = @_;
    foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
        if (defined $snaps{'target'}{$snap}) {
            if ($snaps{'source'}{$snap}{'guid'} == $snaps{'target'}{$snap}{'guid'}) {
                return $snap;
            }
        }
    }
    return 0;
}
@@ -1042,7 +1237,7 @@ sub getssh {
$rhost =~ s/:\Q$fs\E$//;
my $remoteuser = $rhost;
$remoteuser =~ s/\@.*$//;
if ($remoteuser eq 'root' || $args{'no-privilege-elevation'}) { $isroot = 1; } else { $isroot = 0; }
# now we need to establish a persistent master SSH connection
$socket = "/tmp/syncoid-$remoteuser-$rhost-" . time();
open FH, "$sshcmd -M -S $socket -o ControlPersist=1m $args{'sshport'} $rhost exit |";
@@ -1050,7 +1245,7 @@ sub getssh {
    $rhost = "-S $socket $rhost";
} else {
    my $localuid = $<;
    if ($localuid == 0 || $args{'no-privilege-elevation'}) { $isroot = 1; } else { $isroot = 0; }
}
# if ($isroot) { print "this user is root.\n"; } else { print "this user is not root.\n"; }
return ($rhost,$fs,$isroot);
@@ -1078,7 +1273,7 @@ sub getsnaps() {
if ($debug) { print "DEBUG: getting list of snapshots on $fs using $getsnapcmd...\n"; }
open FH, $getsnapcmd;
my @rawsnaps = <FH>;
close FH or die "CRITICAL ERROR: snapshots couldn't be listed for $fs (exit code $?)";
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
# as though each were an entirely separate get command.
@@ -1110,6 +1305,60 @@ sub getsnaps() {
return %snaps;
}
sub getbookmarks() {
my ($rhost,$fs,$isroot,%bookmarks) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $error = 0;
my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark guid,creation $fsescaped 2>&1 |";
if ($debug) { print "DEBUG: getting list of bookmarks on $fs using $getbookmarkcmd...\n"; }
open FH, $getbookmarkcmd;
my @rawbookmarks = <FH>;
close FH or $error = 1;
if ($error == 1) {
if ($rawbookmarks[0] =~ /invalid type/) {
# no support for zfs bookmarks, return empty hash
return %bookmarks;
}
die "CRITICAL ERROR: bookmarks couldn't be listed for $fs (exit code $?)";
}
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
# as though each were an entirely separate get command.
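# example of the raw `zfs get -Hpd 1 -t bookmark guid,creation` output being
# parsed below (tab-separated: name, property, value, source; the dataset and
# bookmark names here are hypothetical):
#   pool/fs#daily_0    guid        4004854698392916029    -
#   pool/fs#daily_0    creation    1543622400             -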
my $lastguid;
foreach my $line (@rawbookmarks) {
# only import bookmark guids, creation from the specified filesystem
if ($line =~ /\Q$fs\E\#.*guid/) {
chomp $line;
$lastguid = $line;
$lastguid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $bookmark = $line;
$bookmark =~ s/^.*\#(.*)\tguid.*$/$1/;
$bookmarks{$lastguid}{'name'}=$bookmark;
} elsif ($line =~ /\Q$fs\E\#.*creation/) {
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $bookmark = $line;
$bookmark =~ s/^.*\#(.*)\tcreation.*$/$1/;
$bookmarks{$lastguid}{'creation'}=$creation;
}
}
return %bookmarks;
}
sub getsendsize {
my ($sourcehost,$snap1,$snap2,$isroot,$receivetoken) = @_;
@@ -1165,6 +1414,11 @@ sub getsendsize {
}
chomp $sendsize;
# check for valid value
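# (some platforms emit no size estimate at all; $sendsize may then hold a
# non-numeric message rather than a byte count, so treat it as unknown)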
if ($sendsize !~ /^\d+$/) {
$sendsize = '';
}
# to avoid confusion with a zero size pv, give sendsize
# a minimum 4K value - or if empty, make sure it reads UNKNOWN
if ($debug) { print "DEBUG: sendsize = $sendsize\n"; }
@@ -1229,16 +1483,16 @@ syncoid - ZFS snapshot replication tool
=head1 SYNOPSIS
syncoid [options]... SOURCE TARGET
  or   syncoid [options]... SOURCE USER@HOST:TARGET
  or   syncoid [options]... USER@HOST:SOURCE TARGET
  or   syncoid [options]... USER@HOST:SOURCE USER@HOST:TARGET
SOURCE                Source ZFS dataset. Can be either local or remote
TARGET                Target ZFS dataset. Can be either local or remote
Options:
--compress=FORMAT     Compresses data during transfer. Currently accepted options are gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none
--identifier=EXTRA    Extra identifier which is included in the snapshot name. Can be used for replicating to multiple targets.
--recursive|r         Also transfers child datasets
--skip-parent         Skips syncing of the parent dataset. Does nothing without '--recursive' option.
@@ -1262,3 +1516,7 @@ Options:
--dumpsnaps           Dumps a list of snapshots during the run
--no-command-checks   Do not check command existence before attempting transfer. Not recommended
--no-resume           Don't use the ZFS resume feature if available
--no-clone-handling   Don't try to recreate clones on target
--no-privilege-elevation   Bypass the root check, for use with ZFS permission delegation
--force-delete        Remove target datasets recursively, if there are no matching snapshots/bookmarks
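Example (a minimal sketch; pool, dataset, user and host names are hypothetical):

    # recursively replicate a local dataset tree to a remote backup host,
    # compressing the stream with lz4 during transfer
    syncoid -r --compress=lz4 tank/data backup@bak01:backup/data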
@@ -10,7 +10,7 @@ set -x
POOL_NAME="sanoid-test-1"
POOL_TARGET="" # root
RESULT="/tmp/sanoid_test_result"
RESULT_CHECKSUM="68c67161a59d0e248094a66061972f53613067c9db52ad981030f36bc081fed7"
# UTC timestamp of start and end
START="1483225200"
@@ -46,10 +46,4 @@ done
saveSnapshotList "${POOL_NAME}" "${RESULT}"
# hourly daily monthly
verifySnapshotList "${RESULT}" 8760 365 12 "${RESULT_CHECKSUM}"
@@ -0,0 +1,56 @@
#!/bin/bash
# test replication with fallback to bookmarks and all intermediate snapshots
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-1.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-1"
TARGET_CHECKSUM="a23564d5bb8a2babc3ac8936fd82825ad9fff9c82d4924f5924398106bbda9f0 -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@snap1
zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1
# initial replication
../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy last common snapshot on source
zfs destroy "${POOL_NAME}"/src@snap1
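# only the bookmark src#snap1 still references the last common point now, so
# the next sync is expected to start with a bookmark send (zfs send -i ...#snap1)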
# create intermediate snapshots
# sleep is needed so creation time can be used for proper sorting
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap2
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap3
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap4
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap5
# replicate, which should fall back to bookmarks
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output=$(zfs list -t snapshot -r "${POOL_NAME}" -H -o name)
checksum=$(echo "${output}" | grep -v syncoid_ | sha256sum)
if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
exit 1
fi
exit 0
@@ -0,0 +1,56 @@
#!/bin/bash
# test replication with fallback to bookmarks but without intermediate snapshots
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-2.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-2"
TARGET_CHECKSUM="2460d4d4417793d2c7a5c72cbea4a8a584c0064bf48d8b6daa8ba55076cba66d -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@snap1
zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1
# initial replication
../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy last common snapshot on source
zfs destroy "${POOL_NAME}"/src@snap1
# create intermediate snapshots
# sleep is needed so creation time can be used for proper sorting
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap2
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap3
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap4
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap5
# replicate, which should fall back to bookmarks (no intermediates, due to --no-stream)
../../../syncoid --no-stream --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output=$(zfs list -t snapshot -r "${POOL_NAME}" -H -o name)
checksum=$(echo "${output}" | sha256sum)
if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
exit 1
fi
exit 0
@@ -0,0 +1,47 @@
#!/bin/bash
# test replication with deletion of target if no matches are found
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-3.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-3"
TARGET_CHECKSUM="0409a2ac216e69971270817189cef7caa91f6306fad9eab1033955b7e7c6bd4c -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs create "${POOL_NAME}"/src/1
zfs create "${POOL_NAME}"/src/2
zfs create "${POOL_NAME}"/src/3
# initial replication
../../../syncoid -r --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy all snapshots of dataset 2 on the source, then snapshot it again
zfs destroy "${POOL_NAME}"/src/2@%
zfs snapshot "${POOL_NAME}"/src/2@test
sleep 1
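# src/2 and dst/2 now share no snapshot at all; --force-delete is expected to
# remove dst/2 and replicate it from scratch instead of refusing the sync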
../../../syncoid -r --force-delete --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output=$(zfs list -t snapshot -r "${POOL_NAME}" -H -o name | sed 's/@syncoid_.*$'/@syncoid_/)
checksum=$(echo "${output}" | sha256sum)
if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
exit 1
fi
exit 0
tests/syncoid/run-tests.sh (executable)
@@ -0,0 +1,27 @@
#!/bin/bash
# runs all the available tests
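# assumed usage: invoke from the tests/syncoid directory; each test's output
# is written to /tmp/syncoid_test_run_<testname>.log (see LOGFILE below)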
for test in */; do
if [ ! -x "${test}/run.sh" ]; then
continue
fi
testName="${test%/}"
LOGFILE=/tmp/syncoid_test_run_"${testName}".log
pushd . > /dev/null
echo -n "Running test ${testName} ... "
cd "${test}"
echo | bash run.sh > "${LOGFILE}" 2>&1
if [ $? -eq 0 ]; then
echo "[PASS]"
else
echo "[FAILED] (see ${LOGFILE})"
fi
popd > /dev/null
done