Michael Brock 2017-11-08 19:47:02 -06:00
commit c80c442b7e
23 changed files with 664 additions and 552 deletions


@ -1,18 +1,22 @@
1.4.17 changed die to warn when unexpectedly unable to remove a snapshot - this
allows sanoid to continue taking/removing other snapshots not affected by
whatever lock prevented the first from being taken or removed
1.4.16 merged @hrast01's extended fix to support -o option1=val,option2=val passthrough to SSH. merged @JakobR's
off-by-one fix to stop unnecessary extra snapshots being taken under certain conditions. merged @stardude900's
update to INSTALL for FreeBSD users re:symlinks. Implemented @LordAro's update to change DIE to WARN when
encountering a dataset with no snapshots and --no-sync-snap set during recursive replication. Implemented
@LordAro's update to sanoid.conf to add an ignore template which does not snap, prune, or monitor.
1.4.15 merged @hrast01's -o option to pass ssh CLI options through. Currently only supports a single -o=option argument -
in the near future, need to add some simple parsing to expand -o=option1,option2 on the CLI to
-o option1 -o option2 as passed to SSH.
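The comma-to-repeated-flag expansion planned in the 1.4.15 entry above can be sketched as follows (an illustrative shell sketch, not syncoid's actual Perl code; the variable names are made up):

```shell
# Hypothetical sketch: split a comma-separated -o value list and emit
# one "-o" per entry. Does not handle values containing commas.
raw="option1=val,option2=val"
expanded=""
for opt in $(echo "$raw" | tr ',' ' '); do
    expanded="$expanded -o $opt"
done
expanded="${expanded# }"   # drop the leading space
echo "$expanded"
```

The same split-and-reassemble step in Perl is a one-line `split`/`map`/`join`, which is presumably why the entry calls it "simple parsing".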
1.4.14 fixed significant regression in syncoid - now pulls creation AND guid on each snap; sorts by
creation and matches by guid. regression reported in #112 by @da-me, thank you!
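The 1.4.14 fix above (pull creation and guid per snapshot, sort by creation, match by guid) can be sketched with made-up sample data (illustrative shell; real syncoid parses `zfs get` output in Perl):

```shell
# Rows are "guid ctime name", standing in for the creation and guid
# properties pulled per snapshot. All values here are fabricated.
src="111 1000 snapA
222 2000 snapB
333 3000 snapC"
tgt="999 1500 snapX
222 2000 snapB-renamed"
s=$(mktemp); t=$(mktemp)
echo "$src" | sort -k1,1 > "$s"           # source rows sorted by guid
echo "$tgt" | cut -d' ' -f1 | sort > "$t" # target guids only
# join on guid, then take the common snapshot with the greatest ctime
match=$(join "$s" "$t" | sort -k2,2n | tail -n1 | cut -d' ' -f3)
echo "$match"
rm -f "$s" "$t"
```

Matching by guid rather than by name is what lets replication resume even when a snapshot was renamed on one side.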
1.4.13 Syncoid will now continue trying to replicate other child datasets after one dataset fails replication
when called recursively. Eg syncoid -r source/parent target/parent when source/parent/child1 has been
deleted and replaced with an imposter will no longer prevent source/parent/child2 from successfully
replicating to target/parent/child2. This could still use some cleanup TBH; syncoid SHOULD exit 3
if any of these errors happen (to assist detection of errors in scripting) but now would exit 0.
@ -24,7 +28,7 @@
and also paves the way in the future for Syncoid to find matching snapshots even after `zfs rename` on source
or target. Thank you Github user @mailinglists35 for the idea!
1.4.10 added --compress=pigz-fast and --compress=pigz-slow. On a Xeon E3-1231v3, pigz-fast is equivalent compression
to --compress=gzip but with compressed throughput of 75.2 MiB/s instead of 18.1 MiB/s. pigz-slow is around 5%
better compression than compress=gzip with roughly equivalent compressed throughput. Note that pigz-fast produces
a whopping 20+% better compression on the test data (a linux boot drive) than lzop does, while still being fast
@ -34,17 +38,17 @@
Default compression remains lzop for SSH transport, with compression automatically set to none if there's no transport
(ie syncoid replication from dataset to dataset on the local machine only).
1.4.9 added -c option to manually specify the SSH cipher used. Must use a cipher supported by both source and target! Thanks
Tamas Papp.
1.4.8 added --no-stream argument to syncoid: allows use of -i incrementals (do not replicate a full snapshot stream, only a
direct incremental update from oldest to most recent snapshot) instead of the normal -I incrementals which include
all intermediate snapshots.
added --no-sync-snap, which has syncoid replicate using only the newest PRE-EXISTING snapshot on source,
instead of default behavior in which syncoid creates a new, ephemeral syncoid snapshot.
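The "newest pre-existing snapshot" selection that --no-sync-snap is described as using above can be sketched like this (sample data; names and creation times are made up):

```shell
# Rows are "name ctime"; pick the row with the greatest creation time.
snaps="pool/data@snap1 1000
pool/data@snap2 3000
pool/data@snap3 2000"
newest=$(echo "$snaps" | sort -k2,2n | tail -n1 | cut -d' ' -f1)
echo "$newest"
```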
1.4.7a (syncoid only) added standard invocation output when called without source or target
as per @rriley and @fajarnugraha suggestions
1.4.7 reverted Perl shebangs to #!/usr/bin/perl - sorry FreeBSD folks, shebanged to /usr/bin/env perl bare calls to syncoid
@ -59,7 +63,7 @@
1.4.6c merged @gusson's pull request to add -sshport argument
1.4.6b updated default cipherlist for syncoid to
chacha20-poly1305@openssh.com,arcfour - arcfour isn't supported on
newer SSH (in Ubuntu Xenial and FreeBSD), chacha20 isn't supported on
some older SSH versions (Ubuntu Precise, I think?)
@ -81,17 +85,17 @@
1.4.3 added SSH persistence to syncoid - using socket speeds up SSH overhead 300%! =)
one extra commit to get rid of the "Exit request sent." SSH noise at the end.
1.4.2 removed -r flag for zfs destroy of pruned snapshots in sanoid, which unintentionally caused same-name
child snapshots to be deleted - thank you Lenz Weber!
1.4.1 updated check_zpool() in sanoid to parse zpool list properly both pre- and post- ZoL v0.6.4
1.4.0 added findoid tool - find and list all versions of a given file in all available ZFS snapshots.
use: findoid /path/to/file
1.3.1 whoops - prevent process_children_only from getting set from blank value in defaults
1.3.0 changed monitor_children_only to process_children_only. which keeps sanoid from messing around with
empty parent datasets at all. also more thoroughly documented features in default config files.
1.2.0 added monitor_children_only parameter to sanoid.conf for use with recursive definitions - in cases where container dataset is kept empty
@ -111,7 +115,7 @@
1.0.15 updated syncoid to accept compression engine flags - --compress=lzo|gzip|none
1.0.14 updated syncoid to reduce output when fetching snapshot list - thank you github user @0xFate.
1.0.13 removed monitor_version again - sorry for the feature instability, forgot I removed it in the first place because I didn't like pulling
in so many dependencies for such a trivial feature


@ -1,9 +1,9 @@
FreeBSD users will need to change the Perl shebangs at the top of the executables from #!/usr/bin/perl
to #!/usr/local/bin/perl in most cases.
Sorry folks, but if I set this with #!/usr/bin/env perl as suggested, then nothing works properly
from a typical cron environment on EITHER operating system, Linux or BSD. I'm mostly using Linux
systems, so I get to set the shebang for my use and give you folks a FREEBSD readme rather than
the other way around. =)
If you don't want to have to change the shebangs, your other option is to drop a symlink on your system:

INSTALL

@ -1,20 +1,21 @@
SYNCOID
-------
Syncoid depends on ssh, pv, gzip, lzop, and mbuffer. It can run with reduced
functionality in the absence of any or all of the above. SSH is only required
for remote synchronization. On newer FreeBSD and Ubuntu Xenial
chacha20-poly1305@openssh.com, on other distributions arcfour crypto is the
default for SSH transport since v1.4.6. Syncoid runs will fail if one of them
is not available on either end of the transport.
On Ubuntu: apt install pv lzop mbuffer
On CentOS: yum install lzo pv mbuffer lzop
On FreeBSD: pkg install pv lzop
FreeBSD notes: FreeBSD may place pv and lzop somewhere other than
/usr/bin; syncoid currently does not check the path.
Simplest path workaround is symlinks, eg:
root@bsd:~# ln -s /usr/local/bin/lzop /usr/bin/lzop
or similar, as appropriate, to create links in /usr/bin
to wherever the utilities actually are on your system.
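The symlink workaround above can be scripted for both utilities at once; this sketch links into a scratch prefix so it runs without root, whereas a real FreeBSD box would link into /usr/bin (the /usr/local/bin targets are where pkg typically installs the tools):

```shell
# Illustration only: create pv and lzop links under a temporary prefix.
prefix=$(mktemp -d)
for util in pv lzop; do
    ln -s "/usr/local/bin/$util" "$prefix/$util"
done
ls -l "$prefix"
```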
@ -26,5 +27,5 @@ without it. Config::IniFiles may be installed from CPAN, though the project
strongly recommends using your distribution's repositories instead.
On Ubuntu: apt install libconfig-inifiles-perl
On CentOS: yum install perl-Config-IniFiles
On FreeBSD: pkg install p5-Config-Inifiles


@ -672,4 +672,3 @@ may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.


@ -49,19 +49,19 @@ Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 da
+ --take-snapshots
This will process your sanoid.conf file, create snapshots, but it will NOT purge expired ones. (Note that snapshots taken are atomic in an individual dataset context, <i>not</i> a global context - snapshots of pool/dataset1 and pool/dataset2 will each be internally consistent and atomic, but one may be a few filesystem transactions "newer" than the other.)
+ --prune-snapshots
This will process your sanoid.conf file, it will NOT create snapshots, but it will purge expired ones.
+ --monitor-snapshots
This option is designed to be run by a Nagios monitoring system. It reports on the health of your snapshots.
+ --monitor-health
This option is designed to be run by a Nagios monitoring system. It reports on the health of the zpool your filesystems are on. It only monitors filesystems that are configured in the sanoid.conf file.
+ --force-update
This clears out sanoid's zfs snapshot listing cache. This is normally not needed.
@ -111,11 +111,11 @@ Syncoid supports recursive replication (replication of a dataset and all its chi
##### Syncoid Command Line Options
+ --[source]
+ [source]
This is the source dataset. It can be either local or remote.
+ --[destination]
+ [destination]
This is the destination dataset. It can be either local or remote.
@ -125,7 +125,7 @@ Syncoid supports recursive replication (replication of a dataset and all its chi
+ --compress <compression type>
Currently accepts gzip and lzo. lzo is fast and light on the processor and is the default. If the selected compression method is unavailable on the source and destination, no compression will be used.
Currently accepted options: gzip, pigz-fast, pigz-slow, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.
+ --source-bwlimit <limit t|g|m|k>
@ -135,7 +135,7 @@ Syncoid supports recursive replication (replication of a dataset and all its chi
This is the bandwidth limit imposed upon the target. This is mainly used if the source does not have mbuffer installed, but bandwidth limits are desired.
+ --nocommandchecks
+ --no-command-checks
Do not check the existence of commands before attempting the transfer. It assumes all programs are available. This should never be used.
@ -161,11 +161,7 @@ Syncoid supports recursive replication (replication of a dataset and all its chi
+ --quiet
Suppress non-error output.
+ --verbose
This prints additional information during the sanoid run.
Suppress non-error output.
+ --debug


@ -1 +1 @@
1.4.16
1.4.17

debian/changelog

@ -0,0 +1,9 @@
sanoid (1.4.16) unstable; urgency=medium
* merged @hrast01's extended fix to support -o option1=val,option2=val passthrough to SSH. merged @JakobR's
* off-by-one fix to stop unnecessary extra snapshots being taken under certain conditions. merged @stardude900's
* update to INSTALL for FreeBSD users re:symlinks. Implemented @LordAro's update to change DIE to WARN when
* encountering a dataset with no snapshots and --no-sync-snap set during recursive replication. Implemented
* @LordAro's update to sanoid.conf to add an ignore template which does not snap, prune, or monitor.
-- Jim Salter <github@jrs-s.net> Wed, 9 Aug 2017 12:28:49 -0400

debian/compat

@ -0,0 +1 @@
9

debian/control

@ -0,0 +1,14 @@
Source: sanoid
Section: unknown
Priority: optional
Maintainer: Jim Salter <jim@openoid.net>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.8
Homepage: https://github.com/jimsalterjrs/sanoid
Vcs-Git: https://github.com/jimsalterjrs/sanoid.git
Vcs-Browser: https://github.com/jimsalterjrs/sanoid
Package: sanoid
Architecture: all
Depends: ${misc:Depends}, ${perl:Depends}, zfsutils-linux | zfs, libconfig-inifiles-perl
Description: Policy-driven snapshot management and replication tools

debian/copyright

@ -0,0 +1,33 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: sanoid
Source: <https://github.com/jimsalterjrs/sanoid>
Files: *
Copyright: 2017 Jim Salter <github@jrs-s.net>
License: GPL-3.0+
Files: debian/*
Copyright: 2017 Jim Salter <github@jrs-s.net>
License: GPL-3.0+
License: GPL-3.0+
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
.
This package is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
.
On Debian systems, the complete text of the GNU General
Public License version 3 can be found in "/usr/share/common-licenses/GPL-3".
# Please also look if there are files or directories which have a
# different copyright/license attached and list them here.
# Please avoid picking licenses with terms that are more restrictive than the
# packaged work, as it may make Debian's contributions unacceptable upstream.

debian/rules

@ -0,0 +1,19 @@
#!/usr/bin/make -f
# See debhelper(7) for more info
# output every command that modifies files on the build system.
#export DH_VERBOSE = 1
%:
dh $@ --with systemd
DESTDIR = $(CURDIR)/debian/sanoid
override_dh_auto_install:
@mkdir -p $(DESTDIR)/usr/sbin; \
cp sanoid syncoid findoid sleepymutex $(DESTDIR)/usr/sbin;
@mkdir -p $(DESTDIR)/etc/sanoid; \
cp sanoid.defaults.conf $(DESTDIR)/etc/sanoid;
@mkdir -p $(DESTDIR)/usr/share/doc/sanoid; \
cp sanoid.conf $(DESTDIR)/usr/share/doc/sanoid/sanoid.conf.example;
@mkdir -p $(DESTDIR)/lib/systemd/system; \
cp debian/sanoid.timer $(DESTDIR)/lib/systemd/system;

debian/sanoid.README.Debian

@ -0,0 +1 @@
To start, copy the example config file in /usr/share/doc/sanoid to /etc/sanoid/sanoid.conf.

debian/sanoid.docs

@ -0,0 +1 @@
README.md

debian/sanoid.service

@ -0,0 +1,9 @@
[Unit]
Description=Snapshot ZFS Pool
Requires=zfs.target
After=zfs.target
ConditionFileNotEmpty=/etc/sanoid/sanoid.conf
[Service]
Type=oneshot
ExecStart=/usr/sbin/sanoid --cron

debian/sanoid.timer

@ -0,0 +1,9 @@
[Unit]
Description=Run Sanoid Every 15 Minutes
[Timer]
OnCalendar=*:0/15
Persistent=true
[Install]
WantedBy=timers.target

debian/source/format

@ -0,0 +1 @@
3.0 (native)

findoid

@ -1,6 +1,6 @@
#!/usr/bin/perl
# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
@ -71,13 +71,13 @@ sub getversions {
$duplicate = 1;
}
}
if (! $duplicate) {
$versions{$filename}{'size'} = $size;
$versions{$filename}{'mtime'} = $mtime;
}
}
return %versions;
}
sub findsnaps {
@ -99,19 +99,19 @@ sub findsnaps {
}
sub getdataset {
my ($path) = @_;
open FH, "$zfs list -Ho mountpoint |";
my @datasets = <FH>;
close FH;
my @matchingdatasets;
foreach my $dataset (@datasets) {
chomp $dataset;
if ( $path =~ /^$dataset/ ) { push @matchingdatasets, $dataset; }
}
my $bestmatch = '';
foreach my $dataset (@matchingdatasets) {
if ( length $dataset > length $bestmatch ) { $bestmatch = $dataset; }
@ -150,7 +150,7 @@ sub getargs {
# if this CLI arg takes a user-specified value and
# we don't already have it, then the user must have
# specified with a space, so pull in the next value
# from the array as this value rather than as the
# next argument.
if ($argvalue eq '') { $argvalue = shift(@args); }
$args{$arg} = $argvalue;
@ -165,7 +165,7 @@ sub getargs {
# if this CLI arg takes a user-specified value and
# we don't already have it, then the user must have
# specified with a space, so pull in the next value
# from the array as this value rather than as the
# next argument.
if ($argvalue eq '') { $argvalue = shift(@args); }
$args{$arg} = $argvalue;

sanoid

@ -4,22 +4,33 @@
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
my $version = '1.4.16';
$::VERSION = '1.4.17';
use strict;
use Config::IniFiles; # read samba-style conf file
use File::Path; # for rmtree command in use_prune
use Data::Dumper; # debugging - print contents of hash
use Time::Local; # to parse dates in reverse
use warnings;
use Config::IniFiles; # read samba-style conf file
use Data::Dumper; # debugging - print contents of hash
use File::Path; # for rmtree command in use_prune
use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage; # pod2usage
use Time::Local; # to parse dates in reverse
# parse CLI arguments
my %args = getargs(@ARGV);
my %args = ("configdir" => "/etc/sanoid");
GetOptions(\%args, "verbose", "debug", "cron", "readonly", "quiet",
"monitor-health", "force-update", "configdir=s",
"monitor-snapshots", "take-snapshots", "prune-snapshots"
) or pod2usage(2);
# If only config directory (or nothing) has been specified, default to --cron --verbose
if (keys %args < 2) {
$args{'cron'} = 1;
$args{'verbose'} = 1;
}
my $pscmd = '/bin/ps';
my $zfs = '/sbin/zfs';
if ($args{'configdir'} eq '') { $args{'configdir'} = '/etc/sanoid'; }
my $conf_file = "$args{'configdir'}/sanoid.conf";
my $default_conf_file = "$args{'configdir'}/sanoid.defaults.conf";
@ -41,12 +52,10 @@ my @params = ( \%config, \%snaps, \%snapsbytype, \%snapsbypath );
if ($args{'debug'}) { $args{'verbose'}=1; blabber (@params); }
if ($args{'monitor-snapshots'}) { monitor_snapshots(@params); }
if ($args{'monitor-health'}) { monitor_health(@params); }
if ($args{'force-update'}) { my $snaps = getsnaps( \%config, $cacheTTL, 1 ); }
if ($args{'version'}) { print "INFO: Sanoid version: $version\n"; }
if ($args{'cron'} || $args{'noargs'}) {
if ($args{'noargs'}) { print "INFO: No arguments given - assuming --cron and --verbose.\n"; }
if (!$args{'quiet'}) { $args{'verbose'} = 1; }
if ($args{'cron'}) {
if ($args{'quiet'}) { $args{'verbose'} = 0; }
take_snapshots (@params);
prune_snapshots (@params);
} else {
@ -55,13 +64,13 @@ if ($args{'cron'} || $args{'noargs'}) {
}
exit 0;
####################################################################################
####################################################################################
####################################################################################
sub monitor_health() {
sub monitor_health {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %pools;
my @messages;
@ -84,16 +93,16 @@ sub monitor_health() {
print "$message\n";
exit $errlevel;
} # end monitor_health()
}
####################################################################################
####################################################################################
####################################################################################
sub monitor_snapshots() {
sub monitor_snapshots {
# nagios plugin format: exit 0,1,2,3 for OK, WARN, CRITICAL, or ERROR.
# check_snapshot_date - test ZFS fs creation timestamp for recentness
# accepts arguments: $filesystem, $warn (in seconds elapsed), $crit (in seconds elapsed)
@ -107,14 +116,14 @@ sub monitor_snapshots() {
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
if (! $config{$section}{'monitor'}) { next; }
if ($config{$section}{'process_children_only'}) { next; }
my $path = $config{$section}{'path'};
push @paths, $path;
my @types = ('yearly','monthly','daily','hourly');
foreach my $type (@types) {
my $smallerperiod = 0;
# we need to set the period length in seconds first
if ($type eq 'hourly') { $smallerperiod = 60; }
@ -127,11 +136,13 @@ sub monitor_snapshots() {
my $warn = $config{$section}{$typewarn} * $smallerperiod;
my $crit = $config{$section}{$typecrit} * $smallerperiod;
my $elapsed = -1;
if (defined $snapsbytype{$path}{$type}{'newest'}) { $elapsed = $snapsbytype{$path}{$type}{'newest'}; }
my $dispelapsed = displaytime($snapsbytype{$path}{$type}{'newest'});
if (defined $snapsbytype{$path}{$type}{'newest'}) {
$elapsed = $snapsbytype{$path}{$type}{'newest'};
}
my $dispelapsed = displaytime($elapsed);
my $dispwarn = displaytime($warn);
my $dispcrit = displaytime($crit);
if ( $elapsed > $crit || $elapsed == -1) {
if ($config{$section}{$typecrit} > 0) {
if (! $config{$section}{'monitor_dont_crit'}) { $errorlevel = 2; }
if ($elapsed == -1) {
@ -148,20 +159,20 @@ sub monitor_snapshots() {
} else {
# push @msgs .= "OK: $path\'s newest $type snapshot is $dispelapsed old \n";
}
}
}
my @sorted_msgs = sort { lc($a) cmp lc($b) } @msgs;
my @sorted_paths = sort { lc($a) cmp lc($b) } @paths;
$msg = join (", ", @sorted_msgs);
my $paths = join (", ", @sorted_paths);
if ($msg eq '') { $msg = "OK: all monitored datasets \($paths\) have fresh snapshots"; }
print "$msg\n";
exit $errorlevel;
} # end monitor()
}
####################################################################################
####################################################################################
@ -193,45 +204,48 @@ sub prune_snapshots {
elsif ($type eq 'daily') { $period = 60*60*24; }
elsif ($type eq 'monthly') { $period = 60*60*24*31; }
elsif ($type eq 'yearly') { $period = 60*60*24*365.25; }
my @sorted = split (/\|/,$snapsbytype{$path}{$type}{'sorted'});
# if we say "daily=30" we really mean "don't keep any dailies more than 30 days old", etc
my $maxage = ( time() - $config{$section}{$type} * $period );
# but if we say "daily=30" we ALSO mean "don't get rid of ANY dailies unless we have more than 30".
my $minsnapsthistype = $config{$section}{$type};
# avoid pissing off use warnings by not executing this block if no matching snaps exist
if (defined $snapsbytype{$path}{$type}{'sorted'}) {
my @sorted = split (/\|/,$snapsbytype{$path}{$type}{'sorted'});
# how many total snaps of this type do we currently have?
my $numsnapsthistype = scalar (@sorted);
# if we say "daily=30" we really mean "don't keep any dailies more than 30 days old", etc
my $maxage = ( time() - $config{$section}{$type} * $period );
# but if we say "daily=30" we ALSO mean "don't get rid of ANY dailies unless we have more than 30".
my $minsnapsthistype = $config{$section}{$type};
my @prunesnaps;
foreach my $snap( @sorted ){
# print "snap $path\@$snap has age $snaps{$path}{$snap}{'ctime'}, maxage is $maxage.\n";
if ( ($snaps{$path}{$snap}{'ctime'} < $maxage) && ($numsnapsthistype > $minsnapsthistype) ) {
my $fullpath = $path . '@' . $snap;
push(@prunesnaps,$fullpath);
# we just got rid of a snap, so we now have one fewer, duh
$numsnapsthistype--;
}
}
# how many total snaps of this type do we currently have?
my $numsnapsthistype = scalar (@sorted);
if ((scalar @prunesnaps) > 0) {
# print "found some snaps to prune!\n"
if (checklock('sanoid_pruning')) {
writelock('sanoid_pruning');
foreach my $snap( @prunesnaps ){
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (iszfsbusy($path)) {
print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n";
} else {
if (! $args{'readonly'}) { system($zfs, "destroy",$snap) == 0 or die "could not remove $snap : $?"; }
}
my @prunesnaps;
foreach my $snap( @sorted ){
# print "snap $path\@$snap has age $snaps{$path}{$snap}{'ctime'}, maxage is $maxage.\n";
if ( ($snaps{$path}{$snap}{'ctime'} < $maxage) && ($numsnapsthistype > $minsnapsthistype) ) {
my $fullpath = $path . '@' . $snap;
push(@prunesnaps,$fullpath);
# we just got rid of a snap, so we now have one fewer, duh
$numsnapsthistype--;
}
}
if ((scalar @prunesnaps) > 0) {
# print "found some snaps to prune!\n"
if (checklock('sanoid_pruning')) {
writelock('sanoid_pruning');
foreach my $snap( @prunesnaps ){
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (iszfsbusy($path)) {
print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n";
} else {
if (! $args{'readonly'}) { system($zfs, "destroy",$snap) == 0 or warn "could not remove $snap : $?"; }
}
}
removelock('sanoid_pruning');
$forcecacheupdate = 1;
%snaps = getsnaps(%config,$cacheTTL,$forcecacheupdate);
} else {
print "INFO: deferring snapshot pruning - valid pruning lock held by other sanoid process.\n";
}
removelock('sanoid_pruning');
$forcecacheupdate = 1;
%snaps = getsnaps(%config,$cacheTTL,$forcecacheupdate);
} else {
print "INFO: deferring snapshot pruning - valid pruning lock held by other sanoid process.\n";
}
}
}
@ -258,7 +272,7 @@ sub take_snapshots {
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
if (! $config{$section}{'autosnap'}) { next; }
if ($config{$section}{'process_children_only'}) { next; }
my $path = $config{$section}{'path'};
@ -270,14 +284,14 @@ sub take_snapshots {
if (defined $snapsbytype{$path}{$type}{'newest'}) {
$newestage = $snapsbytype{$path}{$type}{'newest'};
} else{
$newestage = 9999999999999999;
}
# for use with localtime: @preferredtime will be most recent preferred snapshot time in ($sec,$min,$hour,$mon-1,$year) format
my @preferredtime;
my $lastpreferred;
if ($type eq 'hourly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'hourly_min'};
push @preferredtime,$datestamp{'hour'};
@ -286,7 +300,7 @@ sub take_snapshots {
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60*60; } # preferred time is later this hour - so look at last hour's
} elsif ($type eq 'daily') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'daily_min'};
push @preferredtime,$config{$section}{'daily_hour'};
@ -295,7 +309,7 @@ sub take_snapshots {
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24; } # preferred time is later today - so look at yesterday's
} elsif ($type eq 'monthly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'monthly_min'};
push @preferredtime,$config{$section}{'monthly_hour'};
@ -304,7 +318,7 @@ sub take_snapshots {
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*31; } # preferred time is later this month - so look at last month's
} elsif ($type eq 'yearly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'yearly_min'};
push @preferredtime,$config{$section}{'yearly_hour'};
@ -317,7 +331,7 @@ sub take_snapshots {
# reconstruct our human-formatted most recent preferred snapshot time into an epoch time, to compare with the epoch of our most recent snapshot
my $maxage = time()-$lastpreferred;
if ( $newestage > $maxage ) {
# update to most current possible datestamp
%datestamp = get_date();
@ -331,9 +345,9 @@ sub take_snapshots {
if ( (scalar(@newsnaps)) > 0) {
foreach my $snap ( @newsnaps ) {
if ($args{'verbose'}) { print "taking snapshot $snap\n"; }
if (!$args{'readonly'}) {
system($zfs, "snapshot", "$snap") == 0
or die "CRITICAL ERROR: $zfs snapshot $snap failed, $?";
or warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?";
# make sure we don't end up with multiple snapshots with the same ctime
sleep 1;
}
@ -360,9 +374,9 @@ sub blabber {
#print Dumper(\%snapsbytype);
#print "****** SNAPSBYPATH ******\n";
#print Dumper(\%snapsbypath);
print "\n";
foreach my $section (keys %config) {
my $path = $config{$section}{'path'};
print "Filesystem $path has:\n";
@ -370,7 +384,7 @@ sub blabber {
print "(newest: ";
my $newest = sprintf("%.1f",$snapsbypath{$path}{'newest'} / 60 / 60);
print "$newest hours old)\n";
foreach my $type (keys %{ $snapsbytype{$path} }){
print " $snapsbytype{$path}{$type}{'numsnaps'} $type\n";
print " desired: $config{$section}{$type}\n";
@ -380,7 +394,7 @@ sub blabber {
}
print "\n\n";
}
} # end blabber
@ -390,7 +404,7 @@ sub blabber {
sub getsnapsbytype {
my ($config, $snaps) = @_;
my %snapsbytype;
@ -407,7 +421,7 @@ sub getsnapsbytype {
# iterate through snapshots of each type, ordered by creation time of each snapshot within that type
foreach my $type (keys %rawsnaps) {
$snapsbytype{$path}{$type}{'numsnaps'} = scalar (keys %{ $rawsnaps{$type} });
my @sortedsnaps;
foreach my $name (
sort { $rawsnaps{$type}{$a} <=> $rawsnaps{$type}{$b} } keys %{ $rawsnaps{$type} }
) {
@ -420,7 +434,7 @@ sub getsnapsbytype {
}
return %snapsbytype;
} # end getsnapsbytype
@ -430,22 +444,22 @@ sub getsnapsbytype {
sub getsnapsbypath {
my ($config,$snaps) = @_;
my %snapsbypath;
# iterate through each module section - each section is a single ZFS path
foreach my $section (keys %config) {
my $path = $config{$section}{'path'};
$snapsbypath{$path}{'numsnaps'} = scalar (keys %{ $snaps{$path} });
# iterate through snapshots of each type, ordered by creation time of each snapshot within that type
my %rawsnaps;
foreach my $snapname ( keys %{ $snaps{$path} } ) {
$rawsnaps{$path}{$snapname} = $snaps{$path}{$snapname}{'ctime'};
}
my @sortedsnaps;
foreach my $snapname (
sort { $rawsnaps{$path}{$a} <=> $rawsnaps{$path}{$b} } keys %{ $rawsnaps{$path} }
) {
push @sortedsnaps, $snapname;
@ -454,9 +468,9 @@ sub getsnapsbypath {
my $sortedsnaps = join ('|',@sortedsnaps);
$snapsbypath{$path}{'sorted'} = $sortedsnaps;
}
return %snapsbypath;
} # end getsnapsbypath
@ -472,25 +486,23 @@ sub getsnaps {
my $cache = '/var/cache/sanoidsnapshots.txt';
my @rawsnaps;
my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
$atime,$mtime,$ctime,$blksize,$blocks)
= stat($cache);
if ( $forcecacheupdate || (time() - $mtime) > $cacheTTL ) {
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
if ( $forcecacheupdate || ! -f $cache || (time() - $mtime) > $cacheTTL ) {
if (checklock('sanoid_cacheupdate')) {
writelock('sanoid_cacheupdate');
if ($args{'verbose'}) {
if ($args{'force-update'}) {
print "INFO: cache forcibly expired - updating from zfs list.\n";
} else {
print "INFO: cache expired - updating from zfs list.\n";
}
}
open FH, "$zfs get -Hpt snapshot creation |";
@rawsnaps = <FH>;
close FH;
open FH, "> $cache" or die "Could not write to $cache!\n";
print FH @rawsnaps;
close FH;
@ -510,10 +522,14 @@ sub getsnaps {
foreach my $snap (@rawsnaps) {
my ($fs,$snapname,$snapdate) = ($snap =~ m/(.*)\@(.*ly)\s*creation\s*(\d*)/);
my ($snaptype) = ($snapname =~ m/.*_(\w*ly)/);
if ($snapname =~ /^autosnap/) {
$snaps{$fs}{$snapname}{'ctime'}=$snapdate;
$snaps{$fs}{$snapname}{'type'}=$snaptype;
# avoid pissing off use warnings
if (defined $snapname) {
my ($snaptype) = ($snapname =~ m/.*_(\w*ly)/);
if ($snapname =~ /^autosnap/) {
$snaps{$fs}{$snapname}{'ctime'}=$snapdate;
$snaps{$fs}{$snapname}{'type'}=$snaptype;
}
}
}
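The regex parsing above can be exercised in isolation. A minimal sketch, assuming one made-up line of `zfs get -Hpt snapshot creation` output (the dataset name and ctime are illustrative):

```perl
use strict;
use warnings;

# Sketch of the snapshot-line parsing above, fed one illustrative line
# of `zfs get -Hpt snapshot creation` output.
my $snap = "data/home\@autosnap_2017-11-08_00:00:01_daily\tcreation\t1510099201\t-";
my ($fs, $snapname, $snapdate) = ($snap =~ m/(.*)\@(.*ly)\s*creation\s*(\d*)/);
my ($snaptype) = ($snapname =~ m/.*_(\w*ly)/);
print "$fs $snapname ($snaptype, ctime $snapdate)\n";
```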
@ -538,7 +554,7 @@ sub init {
my @toggles = ('autosnap','autoprune','monitor_dont_warn','monitor_dont_crit','monitor','recursive','process_children_only');
my @istrue=(1,"true","True","TRUE","yes","Yes","YES","on","On","ON");
my @isfalse=(0,"false","False","FALSE","no","No","NO","off","Off","OFF");
foreach my $section (keys %ini) {
# first up - die with honor if unknown parameters are set in any modules or templates by the user.
@ -549,7 +565,7 @@ sub init {
}
if ($section =~ /^template_/) { next; } # don't process templates directly
# only set defaults on sections that haven't already been initialized - this allows us to override values
# for sections directly when they've already been defined recursively, without starting them over from scratch.
if (! defined ($config{$section}{'initialized'})) {
@ -557,17 +573,17 @@ sub init {
# set default values from %defaults, which can then be overridden by template
# and/or local settings within the module.
foreach my $key (keys %{$defaults{'template_default'}}) {
if (! ($key =~ /template|recursive|children_only/)) {
$config{$section}{$key} = $defaults{'template_default'}{$key};
}
}
# override with values from user-defined default template, if any
foreach my $key (keys %{$ini{'template_default'}}) {
if (! ($key =~ /template|recursive/)) {
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined default template.\n"; }
$config{$section}{$key} = $ini{'template_default'}{$key};
}
}
}
@ -582,9 +598,9 @@ sub init {
my $template = 'template_'.$rawtemplate;
foreach my $key (keys %{$ini{$template}}) {
if (! ($key =~ /template|recursive/)) {
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
$config{$section}{$key} = $ini{$template}{$key};
}
}
}
@ -597,19 +613,19 @@ sub init {
$config{$section}{$key} = $ini{$section}{$key};
}
}
# make sure that true values are true and false values are false for any toggled values
foreach my $toggle(@toggles) {
foreach my $true (@istrue) {
if ($config{$section}{$toggle} eq $true) { $config{$section}{$toggle} = 1; }
foreach my $true (@istrue) {
if (defined $config{$section}{$toggle} && $config{$section}{$toggle} eq $true) { $config{$section}{$toggle} = 1; }
}
foreach my $false (@isfalse) {
if ($config{$section}{$toggle} eq $false) { $config{$section}{$toggle} = 0; }
if (defined $config{$section}{$toggle} && $config{$section}{$toggle} eq $false) { $config{$section}{$toggle} = 0; }
}
}
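The toggle normalization above can be sketched standalone; the config keys and values here are illustrative, not taken from a real sanoid.conf:

```perl
use strict;
use warnings;

# Sketch of the true/false normalization above; the %config keys and
# values are illustrative.
my @istrue  = (1,"true","True","TRUE","yes","Yes","YES","on","On","ON");
my @isfalse = (0,"false","False","FALSE","no","No","NO","off","Off","OFF");
my %config = ('autosnap' => 'Yes', 'monitor' => 'OFF');
foreach my $toggle (keys %config) {
    foreach my $true (@istrue) {
        if (defined $config{$toggle} && $config{$toggle} eq $true) { $config{$toggle} = 1; }
    }
    foreach my $false (@isfalse) {
        if (defined $config{$toggle} && $config{$toggle} eq $false) { $config{$toggle} = 0; }
    }
}
print "autosnap=$config{'autosnap'} monitor=$config{'monitor'}\n";
```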
# section path is the section name, unless section path has been explicitly defined
if (defined ($ini{$section}{'path'})) {
$config{$section}{'path'} = $ini{$section}{'path'};
} else {
$config{$section}{'path'} = $section;
@ -665,7 +681,7 @@ sub get_date {
sub displaytime {
# take a time in seconds, return it in human readable form
my $elapsed = shift;
my ($elapsed) = @_;
my $days = int ($elapsed / 60 / 60 / 24);
$elapsed -= $days * 60 * 60 * 24;
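A minimal sketch of the seconds-to-human-readable breakdown that displaytime begins here; the "1d 1h 1m 1s" output format is an assumption, since the rest of the function is not shown in this hunk:

```perl
use strict;
use warnings;

# Sketch: break elapsed seconds into days/hours/minutes/seconds, as
# displaytime() starts to do above. The output format is illustrative.
sub human_elapsed {
    my ($elapsed) = @_;
    my $days = int($elapsed / 60 / 60 / 24);
    $elapsed -= $days * 60 * 60 * 24;
    my $hours = int($elapsed / 60 / 60);
    $elapsed -= $hours * 60 * 60;
    my $mins = int($elapsed / 60);
    my $secs = $elapsed - $mins * 60;
    return "${days}d ${hours}h ${mins}m ${secs}s";
}

print human_elapsed(90061), "\n";
```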
@ -690,7 +706,7 @@ sub displaytime {
sub check_zpool() {
# check_zfs Nagios plugin for monitoring Sun ZFS zpools
# Copyright (c) 2007
# original Written by Nathan Butcher
# adapted for use within Sanoid framework by Jim Salter (2014)
#
@ -709,13 +725,13 @@ sub check_zpool() {
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
# Version: 0.9.2
# Date : 24th July 2007
# This plugin has been tested on FreeBSD 7.0-CURRENT and Solaris 10
# With a bit of fondling, it could be expanded to recognize other OSes in
# future (e.g. if FUSE Linux gets off the ground)
# Verbose levels:-
# 1 - Only alert us of zpool health and size stats
# 2 - ...also alert us of failed devices when things go bad
@ -725,14 +741,14 @@ sub check_zpool() {
# Example: check_zfs zeepool 1
# ZPOOL zeedata : ONLINE {Size:3.97G Used:183K Avail:3.97G Cap:0%}
my %ERRORS=('DEPENDENT'=>4,'UNKNOWN'=>3,'OK'=>0,'WARNING'=>1,'CRITICAL'=>2);
my $state="UNKNOWN";
my $msg="FAILURE";
my $pool=shift;
my $verbose=shift;
my $size="";
my $used="";
my $avail="";
@ -740,14 +756,14 @@ sub check_zpool() {
my $health="";
my $dmge="";
my $dedup="";
if ($verbose < 1 || $verbose > 3) {
print "Verbose levels range from 1-3\n";
exit $ERRORS{$state};
}
my $statcommand="/sbin/zpool list -o name,size,cap,health,free $pool";
if (! open STAT, "$statcommand|") {
print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
exit $ERRORS{$state};
@ -756,7 +772,7 @@ sub check_zpool() {
# chuck the header line
my $header = <STAT>;
# find and parse the line with values for the pool
while(<STAT>) {
chomp;
if (/^${pool}\s+/) {
@ -765,12 +781,12 @@ sub check_zpool() {
($name, $size, $cap, $health, $avail) = @row;
}
}
# Tony: Debugging
# print "Size: $size \t Used: $used \t Avail: $avail \t Cap: $cap \t Health: $health\n";
close(STAT);
## check for valid zpool list response from zpool
if (! $health ) {
$state = "CRITICAL";
@ -778,7 +794,7 @@ sub check_zpool() {
print $state, " ", $msg;
exit ($ERRORS{$state});
}
## determine health of zpool and subsequent error status
if ($health eq "ONLINE" ) {
$state = "OK";
@ -789,39 +805,39 @@ sub check_zpool() {
$state = "CRITICAL";
}
}
## get more detail on possible device failure
## flag to detect section of zpool status involving our zpool
my $poolfind=0;
$statcommand="/sbin/zpool status $pool";
if (! open STAT, "$statcommand|") {
$state = 'CRITICAL';
print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
exit $ERRORS{$state};
}
## go through zfs status output to find zpool fses and devices
while(<STAT>) {
chomp;
if (/^\s${pool}/ && $poolfind==1) {
$poolfind=2;
next;
} elsif ( $poolfind==1 ) {
$poolfind=0;
}
if (/NAME\s+STATE\s+READ\s+WRITE\s+CKSUM/) {
$poolfind=1;
}
if ( /^$/ ) {
$poolfind=0;
}
if ($poolfind == 2) {
## special cases pertaining to full verbose
if (/^\sspares/) {
next unless $verbose == 3;
@ -839,33 +855,33 @@ sub check_zpool() {
my $perc;
my ($sta) = /^\s+\S+\s+(\S+)/;
if (/%/) {
($perc) = /([0-9]+%)/;
} else {
$perc = "working";
}
$dmge=$dmge . "[REPLACING:${sta} (${perc})]:- ";
next;
}
## other cases
my ($dev, $sta) = /^\s+(\S+)\s+(\S+)/;
## pool online, not degraded thanks to dead/corrupted disk
if ($state eq "OK" && $sta eq "UNAVAIL") {
$state="WARNING";
## switching to verbose level 2 to explain weirdness
if ($verbose == 1) {
$verbose =2;
}
}
## no display for verbose level 1
next if ($verbose==1);
## don't display working devices for verbose level 2
next if ($verbose==2 && $state eq "OK");
next if ($verbose==2 && ($sta eq "ONLINE" || $sta eq "AVAIL" || $sta eq "INUSE"));
## show everything else
if (/^\s{3}(\S+)/) {
$dmge=$dmge . "<" . $dev . ":" . $sta . "> ";
@ -876,9 +892,9 @@ sub check_zpool() {
}
}
}
## calling all goats!
$msg = sprintf "ZPOOL %s : %s {Size:%s Free:%s Cap:%s} %s\n", $pool, $health, $size, $avail, $cap, $dmge;
$msg = "$state $msg";
return ($ERRORS{$state},$msg);
@ -891,7 +907,7 @@ sub check_zpool() {
######################################################################################################
sub checklock {
# take argument $lockname.
#
# read /var/run/$lockname.lock for a pid on first line and a mutex on second line.
#
@ -905,19 +921,19 @@ sub checklock {
#
# shorthand - any true return indicates we are clear to lock; a false return indicates
# that somebody else already has the lock and therefore we cannot.
#
my $lockname = shift;
my $lockfile = "/var/run/$lockname.lock";
if (! -e $lockfile) {
# no lockfile
return 1;
}
# lockfile exists. read pid and mutex from it. see if it's our pid. if not, see if
# there's still a process running with that pid and with the same mutex.
open FH, "< $lockfile";
my @lock = <FH>;
close FH;
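The lockfile convention described above (pid on the first line, mutex on the second) can be sketched as follows; the lockfile path and mutex string are illustrative:

```perl
use strict;
use warnings;

# Sketch of the lockfile convention described above: pid on line one,
# "mutex" (the locking process's command line) on line two.
# The path and mutex string here are illustrative.
my $lockfile = "/tmp/sanoid_example_$$.lock";

open my $wfh, '>', $lockfile or die "cannot write $lockfile: $!\n";
print $wfh "$$\n";
print $wfh "perl example-mutex\n";
close $wfh;

open my $rfh, '<', $lockfile or die "cannot read $lockfile: $!\n";
my ($pid, $mutex) = <$rfh>;
close $rfh;
unlink $lockfile;
chomp($pid, $mutex);
print "pid=$pid mutex=$mutex\n";
```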
@ -984,14 +1000,14 @@ sub writelock {
}
my $pid = $$;
open PL, "$pscmd -p $$ -o args= |";
my @processlist = <PL>;
close PL;
my $mutex = pop(@processlist);
chomp $mutex;
open FH, "> $lockfile";
print FH "$pid\n";
print FH "$mutex\n";
@ -1026,75 +1042,10 @@ sub iszfsbusy {
#######################################################################################################################3
#######################################################################################################################3
sub getargs {
my @args = @_;
my %args;
my @validargs;
my @novalueargs;
my %validargs;
my %novalueargs;
push my @validargs, 'verbose','debug','version','monitor-health','monitor-snapshots','force-update','cron','take-snapshots','prune-snapshots','readonly','configdir','quiet';
push my @novalueargs, 'verbose','debug','version','monitor-health','monitor-snapshots','force-update','cron','take-snapshots','prune-snapshots','readonly','quiet';
foreach my $item (@validargs) { $validargs{$item}=1; }
foreach my $item (@novalueargs) { $novalueargs{$item}=1; }
if (! (scalar @args)) {
$args{'noargs'} = 1;
}
while (my $rawarg = shift(@args)) {
my $argvalue;
my $arg = $rawarg;
if ($rawarg =~ /=/) {
# user specified the value for a CLI argument with =
# instead of with blank space. separate appropriately.
$argvalue = $arg;
$arg =~ s/=.*$//;
$argvalue =~ s/^.*=//;
}
if ($rawarg =~ /^--/) {
# doubledash arg
$arg =~ s/^--//;
if ($novalueargs{$arg}) {
$args{$arg} = 1;
} else {
# if this CLI arg takes a user-specified value and
# we don't already have it, then the user must have
# specified with a space, so pull in the next value
# from the array as this value rather than as the
# next argument.
if ($argvalue eq '') { $argvalue = shift(@args); }
$args{$arg} = $argvalue;
}
} elsif ($rawarg =~ /^-/) {
# singledash arg
$arg =~ s/^-//;
if ($novalueargs{$arg}) {
$args{$arg} = 1;
} else {
# if this CLI arg takes a user-specified value and
# we don't already have it, then the user must have
# specified with a space, so pull in the next value
# from the array as this value rather than as the
# next argument.
if ($argvalue eq '') { $argvalue = shift(@args); }
$args{$arg} = $argvalue;
}
} else {
# bare arg
die "ERROR: don't know what to do with bare argument $rawarg.\n";
}
if (! ($validargs{$arg})) { die "ERROR: don't understand argument $rawarg.\n"; }
}
return %args;
}
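The hand-rolled loop above is exactly what Getopt::Long replaces in syncoid; a minimal sketch of equivalent parsing, with illustrative option names and values:

```perl
use strict;
use warnings;
use Getopt::Long qw(GetOptionsFromArray);

# Sketch only: option names and values are illustrative. Getopt::Long
# accepts both "--arg=value" and "--arg value", the same splitting the
# removed loop above did by hand.
my %args = ();
my @argv = ('--verbose', '--sshport=2222');
GetOptionsFromArray(\@argv, \%args, 'verbose', 'debug', 'sshport=i')
    or die "bad arguments\n";
print "verbose=$args{'verbose'} sshport=$args{'sshport'}\n";
```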
sub getchilddatasets {
# for later, if we make sanoid itself support sudo use
my $fs = shift;
my $mysudocmd;
my $mysudocmd = '';
my $getchildrencmd = "$mysudocmd $zfs list -o name -Hr $fs |";
if ($args{'debug'}) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; }
@ -1105,3 +1056,33 @@ sub getchilddatasets {
return @children;
}
__END__
=head1 NAME
sanoid - ZFS snapshot management and replication tool
=head1 SYNOPSIS
sanoid [options]
Assumes --cron --verbose if no other arguments (other than configdir) are specified
Options:
--configdir=DIR Specify a directory to find config file sanoid.conf
--cron Creates snapshots and purges expired snapshots
--verbose Prints out additional information during a sanoid run
--readonly Simulates creation/deletion of snapshots
--quiet Suppresses non-error output
--force-update Clears out sanoid's zfs snapshot cache
--monitor-health Reports on zpool "health", in a Nagios compatible format
--monitor-snapshots Reports on snapshot "health", in a Nagios compatible format
--take-snapshots Creates snapshots as specified in sanoid.conf
--prune-snapshots Purges expired snapshots as specified in sanoid.conf
--help Prints this help text
--version Prints the version number
--debug Prints out a lot of additional information during a sanoid run
View File
@ -54,13 +54,13 @@
monthly = 12
yearly = 0
### don't take new snapshots - snapshots on backup
### datasets are replicated in from source, not
### generated locally
autosnap = no
### monitor hourlies and dailies, but don't warn or
### crit until they're over 48h old, since replication
### is typically daily only
hourly_warn = 2880
hourly_crit = 3600
View File
@ -9,7 +9,7 @@
[template_default]
# these settings don't make sense in a template, but we use the defaults file
# as our list of allowable settings also, so they need to be present here even if
# unset.
path =
recursive =
@ -31,24 +31,24 @@ min_percent_free = 10
# We will automatically take snapshots if autosnap is on, at the desired times configured
# below (or immediately, if we don't have one since the last preferred time for that type).
#
# Note that we will not take snapshots for a given type if that type is set to 0 above,
# regardless of the autosnap setting - for example, if yearly=0 we will not take yearlies
# even if we've defined a preferred time for yearlies and autosnap is on.
autosnap = 1;
autosnap = 1
# hourly - top of the hour
hourly_min = 0;
hourly_min = 0
# daily - at 23:59 (most people expect a daily to contain everything done DURING that day)
daily_hour = 23;
daily_min = 59;
daily_hour = 23
daily_min = 59
# monthly - immediately at the beginning of the month (ie 00:00 of day 1)
monthly_mday = 1;
monthly_hour = 0;
monthly_min = 0;
monthly_mday = 1
monthly_hour = 0
monthly_min = 0
# yearly - immediately at the beginning of the year (ie 00:00 on Jan 1)
yearly_mon = 1;
yearly_mday = 1;
yearly_hour = 0;
yearly_min = 0;
yearly_mon = 1
yearly_mday = 1
yearly_hour = 0
yearly_min = 0
# monitoring plugin - define warn / crit levels for each snapshot type by age, in units of one period down
# example hourly_warn = 90 means issue WARNING if most recent hourly snapshot is not less than 90 minutes old,
@ -63,12 +63,10 @@ monitor = yes
monitor_dont_warn = no
monitor_dont_crit = no
hourly_warn = 90
hourly_crit = 360
daily_warn = 28
daily_crit = 32
monthly_warn = 32
monthly_crit = 35
yearly_warn = 0
yearly_crit = 0
View File
@ -1,17 +1,25 @@
%global version 1.4.13
%global version 1.4.14
%global git_tag v%{version}
Name: sanoid
Version: %{version}
Release: 1%{?dist}
BuildArch: noarch
Summary: A policy-driven snapshot management tool for ZFS file systems
Group: Applications/System
License: GPLv3
URL: https://github.com/jimsalterjrs/sanoid
Source0: https://github.com/jimsalterjrs/%{name}/archive/%{git_tag}/%{name}-%{version}.tar.gz
#BuildRequires:
Requires: perl, mbuffer, lzop, pv
# Enable with systemctl "enable sanoid.timer"
%global _with_systemd 1
Name: sanoid
Version: %{version}
Release: 2%{?dist}
BuildArch: noarch
Summary: A policy-driven snapshot management tool for ZFS file systems
Group: Applications/System
License: GPLv3
URL: https://github.com/jimsalterjrs/sanoid
Source0: https://github.com/jimsalterjrs/%{name}/archive/%{git_tag}/%{name}-%{version}.tar.gz
Requires: perl, mbuffer, lzop, pv
%if 0%{?_with_systemd}
Requires: systemd >= 212
BuildRequires: systemd
%endif
%description
Sanoid is a policy-driven snapshot management
@ -24,20 +32,62 @@ human-readable TOML configuration file.
%setup -q
%build
echo "Nothing to build"
%install
%{__install} -D -m 0644 sanoid.defaults.conf %{buildroot}/etc/sanoid/sanoid.defaults.conf
%{__install} -d %{buildroot}%{_sbindir}
%{__install} -m 0755 sanoid syncoid findoid sleepymutex %{buildroot}%{_sbindir}
%if 0%{?_with_systemd}
%{__install} -d %{buildroot}%{_unitdir}
%endif
%if 0%{?fedora}
%{__install} -D -m 0644 sanoid.conf %{buildroot}%{_docdir}/%{name}/examples/sanoid.conf
echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}/examples/sanoid.cron
%endif
%if 0%{?rhel}
%{__install} -D -m 0644 sanoid.conf %{buildroot}%{_docdir}/%{name}-%{version}/examples/sanoid.conf
%endif
%if 0%{?_with_systemd}
cat > %{buildroot}%{_unitdir}/%{name}.service <<EOF
[Unit]
Description=Snapshot ZFS Pool
Requires=zfs.target
After=zfs.target
[Service]
Type=oneshot
ExecStart=%{_sbindir}/sanoid --cron
EOF
cat > %{buildroot}%{_unitdir}/%{name}.timer <<EOF
[Unit]
Description=Run Sanoid Every Minute
[Timer]
OnCalendar=*:0/1
Persistent=true
[Install]
WantedBy=timers.target
EOF
%else
%if 0%{?fedora}
%{__install} -D -m 0644 sanoid.conf %{buildroot}%{_docdir}/%{name}/examples/sanoid.conf
%endif
%if 0%{?rhel}
echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}-%{version}/examples/sanoid.cron
%endif
%endif
%post
%{?_with_systemd:%{_bindir}/systemctl daemon-reload}
%postun
%{?_with_systemd:%{_bindir}/systemctl daemon-reload}
%files
%doc CHANGELIST VERSION README.md FREEBSD.readme
@ -54,8 +104,16 @@ echo "* * * * * root %{_sbindir}/sanoid --cron" > %{buildroot}%{_docdir}/%{name}
%if 0%{?rhel}
%{_docdir}/%{name}-%{version}
%endif
%if 0%{?_with_systemd}
%{_unitdir}/%{name}.service
%{_unitdir}/%{name}.timer
%endif
%changelog
* Thu Aug 31 2017 Dominic Robinson <github@dcrdev.com> - 1.4.14-2
- Add systemd timers
* Wed Aug 30 2017 Dominic Robinson <github@dcrdev.com> - 1.4.14-1
- Version bump
* Wed Jul 12 2017 Thomas M. Lapp <tmlapp@gmail.com> - 1.4.13-1
- Version bump
- Include FREEBSD.readme in docs
View File
@ -2,8 +2,8 @@
# this is just a cheap way to trigger mutex-based checks for process activity.
#
# ie ./sleepymutex zfs receive data/lolz if you want a mutex hanging around
# as long as necessary that will show up to any routine that actively does
# something like "ps axo | grep 'zfs receive'" or whatever.
sleep 99999
488
syncoid
View File
@ -1,28 +1,46 @@
#!/usr/bin/perl
# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
my $version = '1.4.16';
$::VERSION = '1.4.16';
use strict;
use warnings;
use Data::Dumper;
use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage;
use Time::Local;
use Sys::Hostname;
my %args = getargs(@ARGV);
# Blank defaults to use ssh client's default
# TODO: Merge into a single "sshflags" option?
my %args = ('sshkey' => '', 'sshport' => '', 'sshcipher' => '', 'sshoption' => [], 'target-bwlimit' => '', 'source-bwlimit' => '');
GetOptions(\%args, "no-command-checks", "monitor-version", "compress=s", "dumpsnaps", "recursive|r",
"source-bwlimit=s", "target-bwlimit=s", "sshkey=s", "sshport=i", "sshcipher|c=s", "sshoption|o=s@",
"debug", "quiet", "no-stream", "no-sync-snap") or pod2usage(2);
if ($args{'version'}) {
print "Syncoid version: $version\n";
exit 0;
my %compressargs = %{compressargset($args{'compress'} || 'default')}; # Can't be done with GetOptions arg, as default still needs to be set
# TODO Expand to accept multiple sources?
if (scalar(@ARGV) != 2) {
print("Source or target not found!\n");
pod2usage(2);
exit 127;
} else {
$args{'source'} = $ARGV[0];
$args{'target'} = $ARGV[1];
}
if (!(defined $args{'source'} && defined $args{'target'})) {
print 'usage: syncoid [src_user@src_host:]src_pool/src_dataset [dst_user@dst_host:]dst_pool/dst_dataset'."\n";
exit 127;
# Could possibly merge these into an options function
if (length $args{'source-bwlimit'}) {
$args{'source-bwlimit'} = "-R $args{'source-bwlimit'}";
}
if (length $args{'target-bwlimit'}) {
$args{'target-bwlimit'} = "-r $args{'target-bwlimit'}";
}
$args{'streamarg'} = (defined $args{'no-stream'} ? '-i' : '-I');
my $rawsourcefs = $args{'source'};
my $rawtargetfs = $args{'target'};
@ -32,25 +50,7 @@ my $quiet = $args{'quiet'};
my $zfscmd = '/sbin/zfs';
my $sshcmd = '/usr/bin/ssh';
my $pscmd = '/bin/ps';
my $sshcipher;
if (defined $args{'c'}) {
$sshcipher = "-c $args{'c'}";
} else {
$sshcipher = '-c chacha20-poly1305@openssh.com,arcfour';
}
my $sshport = '-p 22';
my $sshoption;
if (defined $args{'o'}) {
my @options = split(',', $args{'o'});
foreach my $option (@options) {
$sshoption .= " -o $option";
if ($option eq "NoneSwitch=yes") {
$sshcipher = "";
}
}
} else {
$sshoption = "";
}
my $pvcmd = '/usr/bin/pv';
my $mbuffercmd = '/usr/bin/mbuffer';
my $sudocmd = '/usr/bin/sudo';
@ -59,23 +59,25 @@ my $mbufferoptions = '-q -s 128k -m 16M 2>/dev/null';
# being present on remote machines.
my $lscmd = '/bin/ls';
if ( $args{'sshport'} ) {
$sshport = "-p $args{'sshport'}";
if (length $args{'sshcipher'}) {
$args{'sshcipher'} = "-c $args{'sshcipher'}";
}
if (length $args{'sshport'}) {
$args{'sshport'} = "-p $args{'sshport'}";
}
if (length $args{'sshkey'}) {
$args{'sshkey'} = "-i $args{'sshkey'}";
}
my $sshoptions = join " ", map { "-o " . $_ } @{$args{'sshoption'}}; # deref required
# figure out if source and/or target are remote.
if ( $args{'sshkey'} ) {
$sshcmd = "$sshcmd $sshoption $sshcipher $sshport -i $args{'sshkey'}";
}
else {
$sshcmd = "$sshcmd $sshoption $sshcipher $sshport";
}
$sshcmd = "$sshcmd $args{'sshcipher'} $sshoptions $args{'sshport'} $args{'sshkey'}";
if ($debug) { print "DEBUG: SSHCMD: $sshcmd\n"; }
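The join/map above expands each repeated -o option into an ssh flag; a quick standalone check, with illustrative option values that are not syncoid defaults:

```perl
use strict;
use warnings;

# Standalone check of the -o expansion above; the option values are
# illustrative.
my %args = ('sshoption' => ['Compression=yes', 'ConnectTimeout=10']);
my $sshoptions = join " ", map { "-o " . $_ } @{$args{'sshoption'}}; # deref required
print "$sshoptions\n";
```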
my ($sourcehost,$sourcefs,$sourceisroot) = getssh($rawsourcefs);
my ($targethost,$targetfs,$targetisroot) = getssh($rawtargetfs);
my $sourcesudocmd;
my $targetsudocmd;
if ($sourceisroot) { $sourcesudocmd = ''; } else { $sourcesudocmd = $sudocmd; }
if ($targetisroot) { $targetsudocmd = ''; } else { $targetsudocmd = $sudocmd; }
my $sourcesudocmd = $sourceisroot ? '' : $sudocmd;
my $targetsudocmd = $targetisroot ? '' : $sudocmd;
# figure out whether compression, mbuffering, pv
# are available on source, target, local machines.
@ -88,23 +90,23 @@ my %snaps;
## can loop across children separately, for recursive ##
## replication ##
if (! $args{'recursive'}) {
if (!defined $args{'recursive'}) {
syncdataset($sourcehost, $sourcefs, $targethost, $targetfs);
} else {
if ($debug) { print "DEBUG: recursive sync of $sourcefs.\n"; }
my @datasets = getchilddatasets($sourcehost, $sourcefs, $sourceisroot);
foreach my $dataset(@datasets) {
$dataset =~ s/$sourcefs//;
chomp $dataset;
my $childsourcefs = $sourcefs . $dataset;
my $childtargetfs = $targetfs . $dataset;
# print "syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs); \n";
syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs);
}
}
# close SSH sockets for master connections as applicable
if ($sourcehost ne '') {
open FH, "$sshcmd $sourcehost -O exit 2>&1 |";
close FH;
}
@ -123,7 +125,7 @@ exit 0;
sub getchilddatasets {
my ($rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($rhost ne '') { $rhost = "$sshcmd $rhost"; }
@ -147,35 +149,39 @@ sub syncdataset {
warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
return 0;
}
# does the target filesystem exist yet?
my $targetexists = targetexists($targethost,$targetfs,$targetisroot);
# build hashes of the snaps on the source and target filesystems.
%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot);
if ($targetexists) {
my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot);
my %sourcesnaps = %snaps;
%snaps = (%sourcesnaps, %targetsnaps);
}
if ($args{'dumpsnaps'}) { print "merged snapshot list of $targetfs: \n"; dumphash(\%snaps); print "\n\n\n"; }
if (defined $args{'dumpsnaps'}) {
print "merged snapshot list of $targetfs: \n";
dumphash(\%snaps);
print "\n\n\n";
}
# create a new syncoid snapshot on the source filesystem.
my $newsyncsnap;
if (!defined ($args{'no-sync-snap'}) ) {
if (!defined $args{'no-sync-snap'}) {
$newsyncsnap = newsyncsnap($sourcehost,$sourcefs,$sourceisroot);
} else {
# we don't want sync snapshots created, so use the newest snapshot we can find.
$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
if ($newsyncsnap eq 0) {
warn "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.\n";
return 0;
}
}
# there is currently (2014-09-01) a bug in ZFS on Linux
# that causes readonly to always show on if it's EVER
# been turned on... even when it's off... unless and
@ -184,23 +190,23 @@ sub syncdataset {
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
#my $originaltargetreadonly;
# sync 'em up.
if (! $targetexists) {
# do an initial sync from the oldest source snapshot
# THEN do an -I to the newest
if ($debug) {
if (!defined ($args{'no-stream'}) ) {
print "DEBUG: target $targetfs does not exist. Finding oldest available snapshot on source $sourcefs ...\n";
} else {
print "DEBUG: target $targetfs does not exist, and --no-stream selected. Finding newest available snapshot on source $sourcefs ...\n";
}
}
my $oldestsnap = getoldestsnapshot(\%snaps);
if (! $oldestsnap) {
# getoldestsnapshot() returned false, so use new sync snapshot
if ($debug) { print "DEBUG: getoldestsnapshot() returned false, so using $newsyncsnap.\n"; }
$oldestsnap = $newsyncsnap;
}
# if --no-stream is specified, our full needs to be the newest snapshot, not the oldest.
@ -213,67 +219,67 @@ sub syncdataset {
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = 'UNKNOWN'; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) {
if (!defined ($args{'no-stream'}) ) {
print "INFO: Sending oldest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n";
} else {
print "INFO: --no-stream selected; sending newest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n";
}
}
if ($debug) { print "DEBUG: $synccmd\n"; }
# make sure target is (still) not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
return 0;
}
system($synccmd) == 0
or die "CRITICAL ERROR: $synccmd failed: $?";
# now do an -I to the new sync snapshot, assuming there were any snapshots
# other than the new sync snapshot to begin with, of course - and that we
# aren't invoked with --no-stream, in which case a full of the newest snap
# available was all we needed to do
if (!defined ($args{'no-stream'}) && ($oldestsnap ne $newsyncsnap) ) {
# get current readonly status of target, then set it to on during sync
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
# $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly');
# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');
$sendcmd = "$sourcesudocmd $zfscmd send $args{'streamarg'} $sourcefs\@$oldestsnap $sourcefs\@$newsyncsnap";
$pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
$disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
$synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
# make sure target is (still) not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
return 0;
}
if (!$quiet) { print "INFO: Updating new target filesystem with incremental $sourcefs\@$oldestsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
if ($oldestsnap ne $newsyncsnap) {
system($synccmd) == 0
or warn "CRITICAL ERROR: $synccmd failed: $?";
return 0;
} else {
if (!$quiet) { print "INFO: no incremental sync needed; $oldestsnap is already the newest available snapshot.\n"; }
}
# restore original readonly value to target after sync complete
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
}
} else {
# find most recent matching snapshot and do an -I
# to the new snapshot
# get current readonly status of target, then set it to on during sync
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
@ -281,20 +287,20 @@ sub syncdataset {
# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');
my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used');
my $matchingsnap = getmatchingsnapshot($sourcefs, $targetfs, $targetsize, \%snaps);
if (! $matchingsnap) {
# no matching snapshot; we whined piteously already, but let's go ahead and return false
# now in case more child datasets need replication.
return 0;
}
# make sure target is (still) not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
return 0;
}
if ($matchingsnap eq $newsyncsnap) {
# barf some text but don't touch the filesystem
if (!$quiet) { print "INFO: no snapshots on source newer than $newsyncsnap on target. Nothing to do, not syncing.\n"; }
@@ -308,147 +314,92 @@ sub syncdataset {
if ($debug) { print "$targetsudocmd $zfscmd rollback -R $targetfs\@$matchingsnap\n"; }
system ("$targetsudocmd $zfscmd rollback -R $targetfs\@$matchingsnap");
}
my $sendcmd = "$sourcesudocmd $zfscmd send $args{'streamarg'} $sourcefs\@$matchingsnap $sourcefs\@$newsyncsnap";
my $recvcmd = "$targetsudocmd $zfscmd receive -F $targetfs";
my $pvsize = getsendsize($sourcehost,"$sourcefs\@$matchingsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) { print "Sending incremental $sourcefs\@$matchingsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
system("$synccmd") == 0
or die "CRITICAL ERROR: $synccmd failed: $?";
# restore original readonly value to target after sync complete
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
#setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
}
}
# prune obsolete sync snaps on source and target.
pruneoldsyncsnaps($sourcehost,$sourcefs,$newsyncsnap,$sourceisroot,keys %{ $snaps{'source'}});
pruneoldsyncsnaps($targethost,$targetfs,$newsyncsnap,$targetisroot,keys %{ $snaps{'target'}});
} # end syncdataset()
sub compressargset {
my ($value) = @_;
my $DEFAULT_COMPRESSION = 'lzo';
my %COMPRESS_ARGS = (
'none' => {
rawcmd => '',
args => '',
decomrawcmd => '',
decomargs => '',
},
'gzip' => {
rawcmd => '/bin/gzip',
args => '-3',
decomrawcmd => '/bin/zcat',
decomargs => '',
},
'pigz-fast' => {
rawcmd => '/usr/bin/pigz',
args => '-3',
decomrawcmd => '/usr/bin/pigz',
decomargs => '-dc',
},
'pigz-slow' => {
rawcmd => '/usr/bin/pigz',
args => '-9',
decomrawcmd => '/usr/bin/pigz',
decomargs => '-dc',
},
'lzo' => {
rawcmd => '/usr/bin/lzop',
args => '',
decomrawcmd => '/usr/bin/lzop',
decomargs => '-dfc',
},
'lz4-fast' => {
rawcmd => '/usr/bin/lz4',
args => '-1',
decomrawcmd => '/usr/bin/lz4',
decomargs => '-dc',
},
'lz4-slow' => {
rawcmd => '/usr/bin/lz4',
args => '-9',
decomrawcmd => '/usr/bin/lz4',
decomargs => '-dc',
},
);
if ($value eq 'default') {
$value = $DEFAULT_COMPRESSION;
} elsif (!(grep $value eq $_, ('gzip', 'pigz-fast', 'pigz-slow', 'lzo', 'default', 'none'))) {
warn "Unrecognised compression value $value, defaulting to $DEFAULT_COMPRESSION";
$value = $DEFAULT_COMPRESSION;
}
my %comargs = %{$COMPRESS_ARGS{$value}}; # copy
$comargs{'compress'} = $value;
$comargs{'cmd'} = "$comargs{'rawcmd'} $comargs{'args'}";
$comargs{'decomcmd'} = "$comargs{'decomrawcmd'} $comargs{'decomargs'}";
return \%comargs;
}
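The new compressargset sub replaces a long if/elsif chain with a lookup table keyed by algorithm name, falling back to the default for `default` or unrecognized values. A minimal Python sketch of the same pattern (the real code is Perl; names and the trimmed hash are illustrative, not a drop-in port):

```python
# Lookup-table dispatch for compression commands, as in sub compressargset.
# A trimmed-down table; the original also carries pigz-slow, lz4-fast, lz4-slow.
COMPRESS_ARGS = {
    "none":      {"rawcmd": "",              "args": "",   "decomrawcmd": "",              "decomargs": ""},
    "gzip":      {"rawcmd": "/bin/gzip",     "args": "-3", "decomrawcmd": "/bin/zcat",     "decomargs": ""},
    "pigz-fast": {"rawcmd": "/usr/bin/pigz", "args": "-3", "decomrawcmd": "/usr/bin/pigz", "decomargs": "-dc"},
    "lzo":       {"rawcmd": "/usr/bin/lzop", "args": "",   "decomrawcmd": "/usr/bin/lzop", "decomargs": "-dfc"},
}
DEFAULT_COMPRESSION = "lzo"

def compressargset(value):
    # 'default' and unknown values both resolve to the default algorithm
    # (the Perl version warns on unknown values before falling back)
    if value == "default" or value not in COMPRESS_ARGS:
        value = DEFAULT_COMPRESSION
    comargs = dict(COMPRESS_ARGS[value])  # copy, as in the Perl original
    comargs["compress"] = value
    comargs["cmd"] = f"{comargs['rawcmd']} {comargs['args']}".strip()
    comargs["decomcmd"] = f"{comargs['decomrawcmd']} {comargs['decomargs']}".strip()
    return comargs
```

Adding a new algorithm then means adding one table entry instead of another elsif branch, which is the point of the refactor.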
sub checkcommands {
@@ -460,14 +411,14 @@ sub checkcommands {
my $targetssh;
# if --nocommandchecks then assume everything's available and return
if ($args{'nocommandchecks'}) {
if ($debug) { print "DEBUG: not checking for command availability due to --nocommandchecks switch.\n"; }
$avail{'compress'} = 1;
$avail{'localpv'} = 1;
$avail{'localmbuffer'} = 1;
$avail{'sourcembuffer'} = 1;
$avail{'targetmbuffer'} = 1;
return %avail;
}
if (!defined $sourcehost) { $sourcehost = ''; }
@@ -476,37 +427,28 @@ sub checkcommands {
if ($sourcehost ne '') { $sourcessh = "$sshcmd $sourcehost"; } else { $sourcessh = ''; }
if ($targethost ne '') { $targetssh = "$sshcmd $targethost"; } else { $targetssh = ''; }
# if raw compress command is null, we must have specified no compression. otherwise,
# make sure that compression is available everywhere we need it
if ($args{'rawcompresscmd'} eq '') {
$avail{'sourcecompress'} = 0;
$avail{'localcompress'} = 0;
if ($compressargs{'compress'} eq 'none') {
if ($debug) { print "DEBUG: compression forced off from command line arguments.\n"; }
} else {
if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on source...\n"; }
$avail{'sourcecompress'} = `$sourcessh $lscmd $compressargs{'rawcmd'} 2>/dev/null`;
if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on target...\n"; }
$avail{'targetcompress'} = `$targetssh $lscmd $compressargs{'rawcmd'} 2>/dev/null`;
if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on local machine...\n"; }
$avail{'localcompress'} = `$lscmd $compressargs{'rawcmd'} 2>/dev/null`;
}
my ($s,$t);
if ($sourcehost eq '') {
$s = '[local machine]'
} else {
$s = $sourcehost;
$s =~ s/^\S*\@//;
$s = "ssh:$s";
}
if ($targethost eq '') {
$t = '[local machine]'
} else {
$t = $targethost;
@@ -520,15 +462,15 @@ sub checkcommands {
if (!defined $avail{'targetmbuffer'}) { $avail{'targetmbuffer'} = ''; }
if ($avail{'sourcecompress'} eq '') {
if ($compressargs{'rawcmd'} ne '') {
print "WARN: $compressargs{'rawcmd'} not available on source $s- sync will continue without compression.\n";
}
$avail{'compress'} = 0;
}
if ($avail{'targetcompress'} eq '') {
if ($compressargs{'rawcmd'} ne '') {
print "WARN: $compressargs{'rawcmd'} not available on target $t - sync will continue without compression.\n";
}
$avail{'compress'} = 0;
}
@@ -539,9 +481,9 @@ sub checkcommands {
}
# corner case - if source AND target are BOTH remote, we have to check for local compress too
if ($sourcehost ne '' && $targethost ne '' && $avail{'localcompress'} eq '') {
if ($compressargs{'rawcmd'} ne '') {
print "WARN: $compressargs{'rawcmd'} not available on local machine - sync will continue without compression.\n";
}
$avail{'compress'} = 0;
}
@@ -582,7 +524,7 @@ sub checkcommands {
} else {
$avail{'localpv'} = 1;
}
return %avail;
}
@@ -697,10 +639,10 @@ sub buildsynccmd {
$synccmd = "$sendcmd |";
# avoid confusion - accept either source-bwlimit or target-bwlimit as the bandwidth limiting option here
my $bwlimit = '';
if (length $args{'source-bwlimit'}) {
$bwlimit = $args{'source-bwlimit'};
} elsif (length $args{'target-bwlimit'}) {
$bwlimit = $args{'target-bwlimit'};
}
if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $bwlimit $mbufferoptions |"; }
@@ -708,40 +650,40 @@ sub buildsynccmd {
$synccmd .= " $recvcmd";
} elsif ($sourcehost eq '') {
# local source, remote target.
#$synccmd = "$sendcmd | $pvcmd | $compressargs{'cmd'} | $mbuffercmd | $sshcmd $targethost '$compressargs{'decomcmd'} | $mbuffercmd | $recvcmd'";
$synccmd = "$sendcmd |";
if ($avail{'localpv'} && !$quiet) { $synccmd .= " $pvcmd -s $pvsize |"; }
if ($avail{'compress'}) { $synccmd .= " $compressargs{'cmd'} |"; }
if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $args{'source-bwlimit'} $mbufferoptions |"; }
$synccmd .= " $sshcmd $targethost '";
if ($avail{'targetmbuffer'}) { $synccmd .= " $mbuffercmd $args{'target-bwlimit'} $mbufferoptions |"; }
if ($avail{'compress'}) { $synccmd .= " $compressargs{'decomcmd'} |"; }
$synccmd .= " $recvcmd'";
} elsif ($targethost eq '') {
# remote source, local target.
#$synccmd = "$sshcmd $sourcehost '$sendcmd | $compressargs{'cmd'} | $mbuffercmd' | $args{'decompress'}{'cmd'} | $mbuffercmd | $pvcmd | $recvcmd";
$synccmd = "$sshcmd $sourcehost '$sendcmd";
if ($avail{'compress'}) { $synccmd .= " | $compressargs{'cmd'}"; }
if ($avail{'sourcembuffer'}) { $synccmd .= " | $mbuffercmd $args{'source-bwlimit'} $mbufferoptions"; }
$synccmd .= "' | ";
if ($avail{'targetmbuffer'}) { $synccmd .= "$mbuffercmd $args{'target-bwlimit'} $mbufferoptions | "; }
if ($avail{'compress'}) { $synccmd .= "$compressargs{'decomcmd'} | "; }
if ($avail{'localpv'} && !$quiet) { $synccmd .= "$pvcmd -s $pvsize | "; }
$synccmd .= "$recvcmd";
} else {
#remote source, remote target... weird, but whatever, I'm not here to judge you.
#$synccmd = "$sshcmd $sourcehost '$sendcmd | $compressargs{'cmd'} | $mbuffercmd' | $compressargs{'decomcmd'} | $pvcmd | $compressargs{'cmd'} | $mbuffercmd | $sshcmd $targethost '$compressargs{'decomcmd'} | $mbuffercmd | $recvcmd'";
$synccmd = "$sshcmd $sourcehost '$sendcmd";
if ($avail{'compress'}) { $synccmd .= " | $compressargs{'cmd'}"; }
if ($avail{'sourcembuffer'}) { $synccmd .= " | $mbuffercmd $args{'source-bwlimit'} $mbufferoptions"; }
$synccmd .= "' | ";
if ($avail{'compress'}) { $synccmd .= "$compressargs{'decomcmd'} | "; }
if ($avail{'localpv'} && !$quiet) { $synccmd .= "$pvcmd -s $pvsize | "; }
if ($avail{'compress'}) { $synccmd .= "$compressargs{'cmd'} | "; }
if ($avail{'localmbuffer'}) { $synccmd .= "$mbuffercmd $mbufferoptions | "; }
$synccmd .= "$sshcmd $targethost '";
if ($avail{'targetmbuffer'}) { $synccmd .= "$mbuffercmd $args{'target-bwlimit'} $mbufferoptions | "; }
if ($avail{'compress'}) { $synccmd .= "$compressargs{'decomcmd'} | "; }
$synccmd .= "$recvcmd'";
}
return $synccmd;
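The branches above assemble one long shell pipeline, splicing in compression, mbuffer, and pv stages only when checkcommands found the corresponding tools available. A hedged Python sketch of the local-source/remote-target case (command strings illustrative, not the exact Perl output):

```python
def buildsynccmd(sendcmd, recvcmd, targethost, avail,
                 compresscmd="/usr/bin/lzop", decompresscmd="/usr/bin/lzop -dfc"):
    """Local source, remote target: send | [compress] | ssh 'decompress | recv'."""
    cmd = sendcmd + " |"
    if avail.get("compress"):
        # compress locally before the wire...
        cmd += f" {compresscmd} |"
    cmd += f" ssh {targethost} '"
    if avail.get("compress"):
        # ...and decompress on the far side, inside the remote shell quoting
        cmd += f"{decompresscmd} | "
    cmd += f"{recvcmd}'"
    return cmd
```

The same conditional-concatenation approach covers the other three host topologies; only the placement of the ssh hop and the buffering stages changes.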
@@ -758,7 +700,7 @@ sub pruneoldsyncsnaps {
my @prunesnaps;
# only prune snaps beginning with syncoid and our own hostname
foreach my $snap(@snaps) {
if ($snap =~ /^syncoid_$hostid/) {
# no matter what, we categorically refuse to
# prune the new sync snap we created for this run
@@ -781,7 +723,7 @@ sub pruneoldsyncsnaps {
if ($rhost ne '') { $prunecmd = '"' . $prunecmd . '"'; }
if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; }
if ($debug) { print "DEBUG: $rhost $prunecmd\n"; }
system("$rhost $prunecmd") == 0
or warn "CRITICAL ERROR: $rhost $prunecmd failed: $?";
$prunecmd = '';
$counter = 0;
@@ -789,13 +731,13 @@ sub pruneoldsyncsnaps {
}
# if we still have some prune commands stacked up after finishing
# the loop, commit 'em now
if ($counter) {
$prunecmd =~ s/\; $//;
if ($rhost ne '') { $prunecmd = '"' . $prunecmd . '"'; }
if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; }
if ($debug) { print "DEBUG: $rhost $prunecmd\n"; }
system("$rhost $prunecmd") == 0
or warn "WARNING: $rhost $prunecmd failed: $?";
}
return;
}
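pruneoldsyncsnaps stacks destroys up to a per-command maximum before committing them, so pruning many snapshots costs a handful of remote invocations rather than one per snapshot. A minimal Python sketch of that batching (function name and `max_per_cmd` are hypothetical stand-ins for the Perl sub's `$maxsnapspercmd` logic):

```python
def batch_prune_cmds(snaps, fs, max_per_cmd=2, zfscmd="zfs"):
    """Group snapshot destroys into batched shell commands joined with ';'."""
    cmds, batch = [], []
    for snap in snaps:
        batch.append(f"{zfscmd} destroy {fs}@{snap}")
        if len(batch) == max_per_cmd:
            # batch is full: commit it as one command and start a new one
            cmds.append("; ".join(batch))
            batch = []
    if batch:
        # commit any destroys still stacked up after the loop
        cmds.append("; ".join(batch))
    return cmds
```

Each returned string is one invocation; with a remote host, the Perl version additionally wraps it in quotes for ssh.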
@@ -817,7 +759,7 @@ sub getmatchingsnapshot {
print " Replication to target would require destroying existing\n";
print " target. Cowardly refusing to destroy your existing target.\n\n";
# experience tells me we need a mollyguard for people who try to
# zfs create targetpool/targetsnap ; syncoid sourcepool/sourcesnap targetpool/targetsnap ...
if ( $targetsize < (64*1024*1024) ) {
@@ -838,7 +780,7 @@ sub newsyncsnap {
my %date = getdate();
my $snapname = "syncoid\_$hostid\_$date{'stamp'}";
my $snapcmd = "$rhost $mysudocmd $zfscmd snapshot $fs\@$snapname\n";
system($snapcmd) == 0
or die "CRITICAL ERROR: $snapcmd failed: $?";
return $snapname;
}
@@ -875,7 +817,7 @@ sub getssh {
if ($remoteuser eq 'root') { $isroot = 1; } else { $isroot = 0; }
# now we need to establish a persistent master SSH connection
$socket = "/tmp/syncoid-$remoteuser-$rhost-" . time();
open FH, "$sshcmd -M -S $socket -o ControlPersist=1m $args{'sshport'} $rhost exit |";
close FH;
$rhost = "-S $socket $rhost";
} else {
@@ -936,7 +878,7 @@ sub getsnaps() {
}
sub getsendsize {
my ($sourcehost,$snap1,$snap2,$isroot) = @_;
my $mysudocmd;
@@ -962,9 +904,9 @@ sub getsendsize {
close FH;
my $exit = $?;
# process sendsize: last line of multi-line output is
# size of proposed xfer in bytes, but we need to remove
# human-readable crap from it
my $sendsize = pop(@rawsize);
$sendsize =~ s/^size\s*//;
chomp $sendsize;
@@ -974,8 +916,8 @@ sub getsendsize {
if ($debug) { print "DEBUG: sendsize = $sendsize\n"; }
if ($sendsize eq '' || $exit != 0) {
$sendsize = '0';
} elsif ($sendsize < 4096) {
$sendsize = 4096;
}
return $sendsize;
}
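getsendsize takes the last line of the `zfs send -nP` estimate, strips the leading `size` token, treats empty or failed output as zero, and clamps anything smaller than 4096 bytes up to 4096. A minimal Python sketch of that parsing (function name hypothetical; assumes the output shape described above):

```python
def parse_sendsize(rawsize_lines, exit_code=0):
    """Parse the estimated transfer size from `zfs send -nP` style output."""
    # last line of the multi-line output carries the size in bytes
    sendsize = rawsize_lines[-1] if rawsize_lines else ""
    # remove the human-readable "size" prefix, keeping only the number
    sendsize = sendsize.replace("size", "", 1).strip()
    if not sendsize.isdigit() or exit_code != 0:
        return 0          # no usable estimate
    return max(int(sendsize), 4096)  # floor tiny estimates at 4 KiB
```

The 4096 floor keeps the pv progress display sane for near-empty incrementals, and a zero return downstream makes the displayed size read "UNKNOWN".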
@@ -995,4 +937,40 @@ sub getdate {
return %date;
}
__END__
=head1 NAME
syncoid - ZFS snapshot replication tool
=head1 SYNOPSIS
syncoid [options]... SOURCE TARGET
or syncoid [options]... SOURCE [USER@]HOST:TARGET
or syncoid [options]... [USER@]HOST:SOURCE [TARGET]
or syncoid [options]... [USER@]HOST:SOURCE [USER@]HOST:TARGET
SOURCE Source ZFS dataset. Can be either local or remote
TARGET Target ZFS dataset. Can be either local or remote
Options:
--compress=FORMAT Compresses data during transfer. Currently accepted options are gzip, pigz-fast, pigz-slow, lzo (default) & none
--recursive|r Also transfers child datasets
--source-bwlimit=<limit k|m|g|t> Bandwidth limit on the source transfer
--target-bwlimit=<limit k|m|g|t> Bandwidth limit on the target transfer
--no-stream Replicates using newest snapshot instead of intermediates
--no-sync-snap Does not create new snapshot, only transfers existing
--sshkey=FILE Specifies a ssh public key to use to connect
--sshport=PORT Connects to remote on a particular port
--sshcipher|c=CIPHER Passes CIPHER to ssh to use a particular cipher set
--sshoption|o=OPTION Passes OPTION to ssh for remote usage. Can be specified multiple times
--help Prints this helptext
--version Prints the version number
--debug Prints out a lot of additional information during a syncoid run
--monitor-version Currently does nothing
--quiet Suppresses non-error output
--dumpsnaps Dumps a list of snapshots during the run
--no-command-checks Do not check command existence before attempting transfer. Not recommended