removed trailing whitespace

This commit is contained in:
parent 34e4c248bc
commit 7c09666637

28 CHANGELIST
@@ -2,21 +2,21 @@
allows sanoid to continue taking/removing other snapshots not affected by
whatever lock prevented the first from being taken or removed

1.4.16 merged @hrast01's extended fix to support -o option1=val,option2=val passthrough to SSH. merged @JakobR's
off-by-one fix to stop unnecessary extra snapshots being taken under certain conditions. merged @stardude900's
update to INSTALL for FreeBSD users re: symlinks. Implemented @LordAro's update to change DIE to WARN when
encountering a dataset with no snapshots and --no-sync-snap set during recursive replication. Implemented
@LordAro's update to sanoid.conf to add an ignore template which does not snap, prune, or monitor.

1.4.15 merged @hrast01's -o option to pass ssh CLI options through. Currently only supports a single -o=option argument -
in the near future, need to add some simple parsing to expand -o=option1,option2 on the CLI to
-o option1 -o option2 as passed to SSH.

1.4.14 fixed significant regression in syncoid - now pulls creation AND guid on each snap; sorts by
creation and matches by guid. Regression reported in #112 by @da-me, thank you!

1.4.13 Syncoid will now continue trying to replicate other child datasets after one dataset fails replication
when called recursively. E.g. syncoid -r source/parent target/parent when source/parent/child1 has been
deleted and replaced with an imposter will no longer prevent source/parent/child2 from successfully
replicating to target/parent/child2. This could still use some cleanup, TBH; syncoid SHOULD exit 3
if any of these errors happen (to assist detection of errors in scripting) but currently exits 0.
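The 1.4.15 entry describes expanding a single `-o=option1,option2` argument into repeated `-o` flags for ssh. A minimal POSIX-shell sketch of that expansion — the real implementation would be Perl inside syncoid, and the option values here are purely hypothetical:

```shell
# Split a comma-separated -o=opt1,opt2 value into repeated "-o opt" flags,
# as the 1.4.15 changelog entry says a future version should pass them to ssh.
raw="Compression=yes,Port=2222"   # hypothetical user-supplied value

sshflags=""
oldIFS=$IFS
IFS=','
for opt in $raw; do
    sshflags="$sshflags -o $opt"  # one -o per comma-separated option
done
IFS=$oldIFS
sshflags="${sshflags# }"          # trim the leading space

echo "$sshflags"
```

Run against the example value, this prints `-o Compression=yes -o Port=2222`, which is the shape ssh expects.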
@@ -28,7 +28,7 @@
and also paves the way in the future for Syncoid to find matching snapshots even after `zfs rename` on source
or target. Thank you Github user @mailinglists35 for the idea!

1.4.10 added --compress=pigz-fast and --compress=pigz-slow. On a Xeon E3-1231v3, pigz-fast is equivalent compression
to --compress=gzip but with compressed throughput of 75.2 MiB/s instead of 18.1 MiB/s. pigz-slow is around 5%
better compression than compress=gzip with roughly equivalent compressed throughput. Note that pigz-fast produces
a whopping 20+% better compression on the test data (a linux boot drive) than lzop does, while still being fast

@@ -38,17 +38,17 @@
Default compression remains lzop for SSH transport, with compression automatically set to none if there's no transport
(i.e. syncoid replication from dataset to dataset on the local machine only).

1.4.9 added -c option to manually specify the SSH cipher used. Must use a cipher supported by both source and target! Thanks
Tamas Papp.

1.4.8 added --no-stream argument to syncoid: allows use of -i incrementals (do not replicate a full snapshot stream, only a
direct incremental update from oldest to most recent snapshot) instead of the normal -I incrementals which include
all intermediate snapshots.

added --no-sync-snap, which has syncoid replicate using only the newest PRE-EXISTING snapshot on source,
instead of default behavior in which syncoid creates a new, ephemeral syncoid snapshot.

1.4.7a (syncoid only) added standard invocation output when called without source or target
as per @rriley and @fajarnugraha suggestions

1.4.7 reverted Perl shebangs to #!/usr/bin/perl - sorry FreeBSD folks, shebanged to /usr/bin/env perl bare calls to syncoid
@@ -63,7 +63,7 @@

1.4.6c merged @gusson's pull request to add -sshport argument

1.4.6b updated default cipherlist for syncoid to
chacha20-poly1305@openssh.com,arcfour - arcfour isn't supported on
newer SSH (in Ubuntu Xenial and FreeBSD), chacha20 isn't supported on
some older SSH versions (Ubuntu Precise, I think?)
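The compatibility constraint behind that two-cipher default amounts to first-supported-wins selection. A shell sketch of that idea — an illustration only, not syncoid's code: in practice the whole comma-separated list is handed to ssh, which negotiates a mutually supported cipher itself:

```shell
# Pick the first cipher from the v1.4.6b default list that appears in a
# newline-separated "supported ciphers" list (the format `ssh -Q cipher` prints).
pick_cipher() {
    supported="$1"
    for c in chacha20-poly1305@openssh.com arcfour; do
        # -x: match the whole line; -F: treat the cipher name literally
        if printf '%s\n' "$supported" | grep -qxF "$c"; then
            echo "$c"
            return 0
        fi
    done
    echo none
}

# Mock a newer SSH (no arcfour) and an older SSH (no chacha20):
pick_cipher "chacha20-poly1305@openssh.com"
pick_cipher "arcfour"
```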
@@ -85,17 +85,17 @@
1.4.3 added SSH persistence to syncoid - using socket speeds up SSH overhead 300%! =)
one extra commit to get rid of the "Exit request sent." SSH noise at the end.

1.4.2 removed -r flag for zfs destroy of pruned snapshots in sanoid, which unintentionally caused same-name
child snapshots to be deleted - thank you Lenz Weber!

1.4.1 updated check_zpool() in sanoid to parse zpool list properly both pre- and post- ZoL v0.6.4

1.4.0 added findoid tool - find and list all versions of a given file in all available ZFS snapshots.
use: findoid /path/to/file

1.3.1 whoops - prevent process_children_only from getting set from blank value in defaults

1.3.0 changed monitor_children_only to process_children_only, which keeps sanoid from messing around with
empty parent datasets at all. also more thoroughly documented features in default config files.

1.2.0 added monitor_children_only parameter to sanoid.conf for use with recursive definitions - in cases where container dataset is kept empty

@@ -115,7 +115,7 @@

1.0.15 updated syncoid to accept compression engine flags - --compress=lzo|gzip|none

1.0.14 updated syncoid to reduce output when fetching snapshot list - thank you github user @0xFate.

1.0.13 removed monitor_version again - sorry for the feature instability, forgot I removed it in the first place because I didn't like pulling
in so many dependencies for such a trivial feature
@@ -1,9 +1,9 @@
FreeBSD users will need to change the Perl shebangs at the top of the executables from #!/usr/bin/perl
to #!/usr/local/bin/perl in most cases.

Sorry folks, but if I set this with #!/usr/bin/env perl as suggested, then nothing works properly
from a typical cron environment on EITHER operating system, Linux or BSD. I'm mostly using Linux
systems, so I get to set the shebang for my use and give you folks a FREEBSD readme rather than
the other way around. =)

If you don't want to have to change the shebangs, your other option is to drop a symlink on your system:
11 INSTALL

@@ -1,10 +1,10 @@
SYNCOID
-------
Syncoid depends on ssh, pv, gzip, lzop, and mbuffer. It can run with reduced
functionality in the absence of any or all of the above. SSH is only required
for remote synchronization. On newer FreeBSD and Ubuntu Xenial
chacha20-poly1305@openssh.com, on other distributions arcfour crypto is the
default for SSH transport since v1.4.6. Syncoid runs will fail if one of them
is not available on either end of the transport.

On Ubuntu: apt install pv lzop mbuffer
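A quick way to see which of the dependencies listed above are on the PATH — a sketch, not part of syncoid itself, which instead degrades gracefully at runtime when tools are absent:

```shell
# Report which of the named binaries are missing from PATH.
check_deps() {
    missing=""
    for bin in "$@"; do
        command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
    done
    echo "missing:$missing"
}

check_deps ssh pv gzip lzop mbuffer
```

On a box with everything installed this prints `missing:`; otherwise it lists the absent tools after the colon.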
@@ -14,7 +14,7 @@ FreeBSD notes: FreeBSD may place pv and lzop somewhere other than
/usr/bin ; syncoid currently does not check path.

Simplest path workaround is symlinks, e.g.:
root@bsd:~# ln -s /usr/local/bin/lzop /usr/bin/lzop
or similar, as appropriate, to create links in /usr/bin
to wherever the utilities actually are on your system.
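The same symlink idea, demonstrated in a scratch directory so it can run unprivileged; the stub stands in for the real lzop binary, and the real command is the `ln -s` shown above:

```shell
# Demonstrate the symlink workaround using a throwaway directory tree
# instead of the real /usr/bin and /usr/local/bin.
scratch=$(mktemp -d)
mkdir -p "$scratch/usr/local/bin" "$scratch/usr/bin"

# A stub standing in for the real lzop installed under /usr/local/bin.
printf '#!/bin/sh\necho lzop-stub\n' > "$scratch/usr/local/bin/lzop"
chmod +x "$scratch/usr/local/bin/lzop"

# The workaround itself: link from where syncoid looks to where the tool is.
ln -s "$scratch/usr/local/bin/lzop" "$scratch/usr/bin/lzop"

"$scratch/usr/bin/lzop"    # runs the stub via the symlink
```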
@@ -27,4 +27,3 @@ strongly recommends using your distribution's repositories instead.

On Ubuntu: apt install libconfig-inifiles-perl
On FreeBSD: pkg install p5-Config-Inifiles
1 LICENSE

@@ -672,4 +672,3 @@ may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
10 README.md

@@ -49,19 +49,19 @@ Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 da
+ --take-snapshots

This will process your sanoid.conf file, create snapshots, but it will NOT purge expired ones. (Note that snapshots taken are atomic in an individual dataset context, <i>not</i> a global context - snapshots of pool/dataset1 and pool/dataset2 will each be internally consistent and atomic, but one may be a few filesystem transactions "newer" than the other.)

+ --prune-snapshots

This will process your sanoid.conf file; it will NOT create snapshots, but it will purge expired ones.

+ --monitor-snapshots

This option is designed to be run by a Nagios monitoring system. It reports on the health of your snapshots.

+ --monitor-health

This option is designed to be run by a Nagios monitoring system. It reports on the health of the zpool your filesystems are on. It only monitors filesystems that are configured in the sanoid.conf file.

+ --force-update

This clears out sanoid's zfs snapshot listing cache. This is normally not needed.
@@ -161,7 +161,7 @@ Syncoid supports recursive replication (replication of a dataset and all its chi

+ --quiet

Suppress non-error output.

+ --debug
16 findoid

@@ -1,6 +1,6 @@
#!/usr/bin/perl

# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
@@ -71,13 +71,13 @@ sub getversions {
                $duplicate = 1;
            }
        }
        if (! $duplicate) {
            $versions{$filename}{'size'} = $size;
            $versions{$filename}{'mtime'} = $mtime;
        }
    }

    return %versions;
}

sub findsnaps {
@@ -99,19 +99,19 @@ sub findsnaps {
}

sub getdataset {

    my ($path) = @_;

    open FH, "$zfs list -Ho mountpoint |";
    my @datasets = <FH>;
    close FH;

    my @matchingdatasets;
    foreach my $dataset (@datasets) {
        chomp $dataset;
        if ( $path =~ /^$dataset/ ) { push @matchingdatasets, $dataset; }
    }

    my $bestmatch = '';
    foreach my $dataset (@matchingdatasets) {
        if ( length $dataset > length $bestmatch ) { $bestmatch = $dataset; }
@@ -150,7 +150,7 @@ sub getargs {
            # if this CLI arg takes a user-specified value and
            # we don't already have it, then the user must have
            # specified with a space, so pull in the next value
            # from the array as this value rather than as the
            # next argument.
            if ($argvalue eq '') { $argvalue = shift(@args); }
            $args{$arg} = $argvalue;

@@ -165,7 +165,7 @@ sub getargs {
            # if this CLI arg takes a user-specified value and
            # we don't already have it, then the user must have
            # specified with a space, so pull in the next value
            # from the array as this value rather than as the
            # next argument.
            if ($argvalue eq '') { $argvalue = shift(@args); }
            $args{$arg} = $argvalue;
181 sanoid

@@ -41,7 +41,7 @@ my @params = ( \%config, \%snaps, \%snapsbytype, \%snapsbypath );
if ($args{'debug'}) { $args{'verbose'}=1; blabber (@params); }
if ($args{'monitor-snapshots'}) { monitor_snapshots(@params); }
if ($args{'monitor-health'}) { monitor_health(@params); }
if ($args{'force-update'}) { my $snaps = getsnaps( \%config, $cacheTTL, 1 ); }
if ($args{'version'}) { print "INFO: Sanoid version: $version\n"; }

if ($args{'cron'} || $args{'noargs'}) {

@@ -55,7 +55,7 @@ if ($args{'cron'} || $args{'noargs'}) {
}

exit 0;


####################################################################################

@@ -93,7 +93,7 @@ sub monitor_health() {
sub monitor_snapshots() {

    # nagios plugin format: exit 0,1,2,3 for OK, WARN, CRITICAL, or ERROR.

    # check_snapshot_date - test ZFS fs creation timestamp for recentness
    # accepts arguments: $filesystem, $warn (in seconds elapsed), $crit (in seconds elapsed)
@@ -107,14 +107,14 @@ sub monitor_snapshots() {
    foreach my $section (keys %config) {
        if ($section =~ /^template/) { next; }
        if (! $config{$section}{'monitor'}) { next; }
        if ($config{$section}{'process_children_only'}) { next; }

        my $path = $config{$section}{'path'};
        push @paths, $path;

        my @types = ('yearly','monthly','daily','hourly');
        foreach my $type (@types) {

            my $smallerperiod = 0;
            # we need to set the period length in seconds first
            if ($type eq 'hourly') { $smallerperiod = 60; }

@@ -131,7 +131,7 @@ sub monitor_snapshots() {
            my $dispelapsed = displaytime($snapsbytype{$path}{$type}{'newest'});
            my $dispwarn = displaytime($warn);
            my $dispcrit = displaytime($crit);
            if ( $elapsed > $crit || $elapsed == -1) {
                if ($config{$section}{$typecrit} > 0) {
                    if (! $config{$section}{'monitor_dont_crit'}) { $errorlevel = 2; }
                    if ($elapsed == -1) {

@@ -148,17 +148,17 @@ sub monitor_snapshots() {
            } else {
                # push @msgs .= "OK: $path\'s newest $type snapshot is $dispelapsed old \n";
            }

        }
    }

    my @sorted_msgs = sort { lc($a) cmp lc($b) } @msgs;
    my @sorted_paths = sort { lc($a) cmp lc($b) } @paths;
    $msg = join (", ", @sorted_msgs);
    my $paths = join (", ", @sorted_paths);

    if ($msg eq '') { $msg = "OK: all monitored datasets \($paths\) have fresh snapshots"; }

    print "$msg\n";
    exit $errorlevel;
} # end monitor()
@@ -193,7 +193,7 @@ sub prune_snapshots {
        elsif ($type eq 'daily') { $period = 60*60*24; }
        elsif ($type eq 'monthly') { $period = 60*60*24*31; }
        elsif ($type eq 'yearly') { $period = 60*60*24*365.25; }

        my @sorted = split (/\|/,$snapsbytype{$path}{$type}{'sorted'});

        # if we say "daily=30" we really mean "don't keep any dailies more than 30 days old", etc

@@ -258,7 +258,7 @@ sub take_snapshots {
    foreach my $section (keys %config) {
        if ($section =~ /^template/) { next; }
        if (! $config{$section}{'autosnap'}) { next; }
        if ($config{$section}{'process_children_only'}) { next; }

        my $path = $config{$section}{'path'};

@@ -270,14 +270,14 @@ sub take_snapshots {
            if (defined $snapsbytype{$path}{$type}{'newest'}) {
                $newestage = $snapsbytype{$path}{$type}{'newest'};
            } else {
                $newestage = 9999999999999999;
            }

            # for use with localtime: @preferredtime will be most recent preferred snapshot time in ($sec,$min,$hour,$mon-1,$year) format
            my @preferredtime;
            my $lastpreferred;

            if ($type eq 'hourly') {
                push @preferredtime,0; # try to hit 0 seconds
                push @preferredtime,$config{$section}{'hourly_min'};
                push @preferredtime,$datestamp{'hour'};
@@ -286,7 +286,7 @@ sub take_snapshots {
                push @preferredtime,$datestamp{'year'};
                $lastpreferred = timelocal(@preferredtime);
                if ($lastpreferred > time()) { $lastpreferred -= 60*60; } # preferred time is later this hour - so look at last hour's
            } elsif ($type eq 'daily') {
                push @preferredtime,0; # try to hit 0 seconds
                push @preferredtime,$config{$section}{'daily_min'};
                push @preferredtime,$config{$section}{'daily_hour'};

@@ -295,7 +295,7 @@ sub take_snapshots {
                push @preferredtime,$datestamp{'year'};
                $lastpreferred = timelocal(@preferredtime);
                if ($lastpreferred > time()) { $lastpreferred -= 60*60*24; } # preferred time is later today - so look at yesterday's
            } elsif ($type eq 'monthly') {
                push @preferredtime,0; # try to hit 0 seconds
                push @preferredtime,$config{$section}{'monthly_min'};
                push @preferredtime,$config{$section}{'monthly_hour'};

@@ -304,7 +304,7 @@ sub take_snapshots {
                push @preferredtime,$datestamp{'year'};
                $lastpreferred = timelocal(@preferredtime);
                if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*31; } # preferred time is later this month - so look at last month's
            } elsif ($type eq 'yearly') {
                push @preferredtime,0; # try to hit 0 seconds
                push @preferredtime,$config{$section}{'yearly_min'};
                push @preferredtime,$config{$section}{'yearly_hour'};

@@ -317,7 +317,7 @@ sub take_snapshots {

            # reconstruct our human-formatted most recent preferred snapshot time into an epoch time, to compare with the epoch of our most recent snapshot
            my $maxage = time()-$lastpreferred;

            if ( $newestage > $maxage ) {
                # update to most current possible datestamp
                %datestamp = get_date();
@@ -331,7 +331,7 @@ sub take_snapshots {
    if ( (scalar(@newsnaps)) > 0) {
        foreach my $snap ( @newsnaps ) {
            if ($args{'verbose'}) { print "taking snapshot $snap\n"; }
            if (!$args{'readonly'}) {
                system($zfs, "snapshot", "$snap") == 0
                    or warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?";
                # make sure we don't end up with multiple snapshots with the same ctime

@@ -360,9 +360,9 @@ sub blabber {
    #print Dumper(\%snapsbytype);
    #print "****** SNAPSBYPATH ******\n";
    #print Dumper(\%snapsbypath);

    print "\n";

    foreach my $section (keys %config) {
        my $path = $config{$section}{'path'};
        print "Filesystem $path has:\n";

@@ -370,7 +370,7 @@ sub blabber {
        print "(newest: ";
        my $newest = sprintf("%.1f",$snapsbypath{$path}{'newest'} / 60 / 60);
        print "$newest hours old)\n";

        foreach my $type (keys %{ $snapsbytype{$path} }){
            print "     $snapsbytype{$path}{$type}{'numsnaps'} $type\n";
            print "         desired: $config{$section}{$type}\n";

@@ -380,7 +380,7 @@ sub blabber {
        }
        print "\n\n";
    }

} # end blabber
@@ -390,7 +390,7 @@ sub blabber {


sub getsnapsbytype {

    my ($config, $snaps) = @_;
    my %snapsbytype;

@@ -407,7 +407,7 @@ sub getsnapsbytype {
        # iterate through snapshots of each type, ordered by creation time of each snapshot within that type
        foreach my $type (keys %rawsnaps) {
            $snapsbytype{$path}{$type}{'numsnaps'} = scalar (keys %{ $rawsnaps{$type} });
            my @sortedsnaps;
            foreach my $name (
                sort { $rawsnaps{$type}{$a} <=> $rawsnaps{$type}{$b} } keys %{ $rawsnaps{$type} }
            ) {

@@ -420,7 +420,7 @@ sub getsnapsbytype {
    }

    return %snapsbytype;

} # end getsnapsbytype

@@ -430,22 +430,22 @@ sub getsnapsbytype {


sub getsnapsbypath {

    my ($config,$snaps) = @_;
    my %snapsbypath;

    # iterate through each module section - each section is a single ZFS path
    foreach my $section (keys %config) {
        my $path = $config{$section}{'path'};
        $snapsbypath{$path}{'numsnaps'} = scalar (keys %{ $snaps{$path} });

        # iterate through snapshots of each type, ordered by creation time of each snapshot within that type
        my %rawsnaps;
        foreach my $snapname ( keys %{ $snaps{$path} } ) {
            $rawsnaps{$path}{$snapname} = $snaps{$path}{$snapname}{'ctime'};
        }
        my @sortedsnaps;
        foreach my $snapname (
            sort { $rawsnaps{$path}{$a} <=> $rawsnaps{$path}{$b} } keys %{ $rawsnaps{$path} }
        ) {
            push @sortedsnaps, $snapname;

@@ -454,9 +454,9 @@ sub getsnapsbypath {
        my $sortedsnaps = join ('|',@sortedsnaps);
        $snapsbypath{$path}{'sorted'} = $sortedsnaps;
    }

    return %snapsbypath;

} # end getsnapsbypath
@@ -472,7 +472,7 @@ sub getsnaps {

    my $cache = '/var/cache/sanoidsnapshots.txt';
    my @rawsnaps;

    my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
        $atime,$mtime,$ctime,$blksize,$blocks)
        = stat($cache);

@@ -480,17 +480,17 @@ sub getsnaps {
    if ( $forcecacheupdate || (time() - $mtime) > $cacheTTL ) {
        if (checklock('sanoid_cacheupdate')) {
            writelock('sanoid_cacheupdate');
            if ($args{'verbose'}) {
                if ($args{'force-update'}) {
                    print "INFO: cache forcibly expired - updating from zfs list.\n";
                } else {
                    print "INFO: cache expired - updating from zfs list.\n";
                }
            }
            open FH, "$zfs get -Hpt snapshot creation |";
            @rawsnaps = <FH>;
            close FH;

            open FH, "> $cache" or die 'Could not write to $cache!\n';
            print FH @rawsnaps;
            close FH;
@@ -538,7 +538,7 @@ sub init {
    my @toggles = ('autosnap','autoprune','monitor_dont_warn','monitor_dont_crit','monitor','recursive','process_children_only');
    my @istrue=(1,"true","True","TRUE","yes","Yes","YES","on","On","ON");
    my @isfalse=(0,"false","False","FALSE","no","No","NO","off","Off","OFF");

    foreach my $section (keys %ini) {

        # first up - die with honor if unknown parameters are set in any modules or templates by the user.

@@ -549,7 +549,7 @@ sub init {
        }

        if ($section =~ /^template_/) { next; } # don't process templates directly

        # only set defaults on sections that haven't already been initialized - this allows us to override values
        # for sections directly when they've already been defined recursively, without starting them over from scratch.
        if (! defined ($config{$section}{'initialized'})) {

@@ -557,17 +557,17 @@ sub init {
            # set default values from %defaults, which can then be overridden by template
            # and/or local settings within the module.
            foreach my $key (keys %{$defaults{'template_default'}}) {
                if (! ($key =~ /template|recursive|children_only/)) {
                    $config{$section}{$key} = $defaults{'template_default'}{$key};
                }
            }

            # override with values from user-defined default template, if any

            foreach my $key (keys %{$ini{'template_default'}}) {
                if (! ($key =~ /template|recursive/)) {
                    if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined default template.\n"; }
                    $config{$section}{$key} = $ini{'template_default'}{$key};
                }
            }
        }

@@ -582,9 +582,9 @@ sub init {

            my $template = 'template_'.$rawtemplate;
            foreach my $key (keys %{$ini{$template}}) {
                if (! ($key =~ /template|recursive/)) {
                    if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
                    $config{$section}{$key} = $ini{$template}{$key};
                }
            }
        }

@@ -597,10 +597,10 @@ sub init {
            $config{$section}{$key} = $ini{$section}{$key};
        }
    }

    # make sure that true values are true and false values are false for any toggled values
    foreach my $toggle(@toggles) {
        foreach my $true (@istrue) {
            if ($config{$section}{$toggle} eq $true) { $config{$section}{$toggle} = 1; }
        }
        foreach my $false (@isfalse) {

@@ -608,8 +608,8 @@ sub init {
        }
    }

    # section path is the section name, unless section path has been explicitly defined
    if (defined ($ini{$section}{'path'})) {
        $config{$section}{'path'} = $ini{$section}{'path'};
    } else {
        $config{$section}{'path'} = $section;
@@ -690,7 +690,7 @@ sub displaytime {

sub check_zpool() {
    # check_zfs Nagios plugin for monitoring Sun ZFS zpools
    # Copyright (c) 2007
    # original Written by Nathan Butcher
    # adapted for use within Sanoid framework by Jim Salter (2014)
    #

@@ -709,13 +709,13 @@ sub check_zpool() {
    # You should have received a copy of the GNU General Public License
    # along with this program; if not, write to the Free Software
    # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

    # Version: 0.9.2
    # Date : 24th July 2007
    # This plugin has been tested on FreeBSD 7.0-CURRENT and Solaris 10
    # With a bit of fondling, it could be expanded to recognize other OSes in
    # future (e.g. if FUSE Linux gets off the ground)

    # Verbose levels:-
    # 1 - Only alert us of zpool health and size stats
    # 2 - ...also alert us of failed devices when things go bad
@@ -725,14 +725,14 @@ sub check_zpool() {
    # Example: check_zfs zeepool 1
    # ZPOOL zeedata : ONLINE {Size:3.97G Used:183K Avail:3.97G Cap:0%}

    my %ERRORS=('DEPENDENT'=>4,'UNKNOWN'=>3,'OK'=>0,'WARNING'=>1,'CRITICAL'=>2);
    my $state="UNKNOWN";
    my $msg="FAILURE";

    my $pool=shift;
    my $verbose=shift;

    my $size="";
    my $used="";
    my $avail="";

@@ -740,14 +740,14 @@ sub check_zpool() {
    my $health="";
    my $dmge="";
    my $dedup="";

    if ($verbose < 1 || $verbose > 3) {
        print "Verbose levels range from 1-3\n";
        exit $ERRORS{$state};
    }

    my $statcommand="/sbin/zpool list -o name,size,cap,health,free $pool";

    if (! open STAT, "$statcommand|") {
        print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
        exit $ERRORS{$state};
@@ -756,7 +756,7 @@ sub check_zpool() {
    # chuck the header line
    my $header = <STAT>;

    # find and parse the line with values for the pool
    while(<STAT>) {
        chomp;
        if (/^${pool}\s+/) {

@@ -765,12 +765,12 @@ sub check_zpool() {
            ($name, $size, $cap, $health, $avail) = @row;
        }
    }

    # Tony: Debugging
    # print "Size: $size \t Used: $used \t Avai: $avail \t Cap: $cap \t Health: $health\n";

    close(STAT);

    ## check for valid zpool list response from zpool
    if (! $health ) {
        $state = "CRITICAL";
@@ -778,7 +778,7 @@ sub check_zpool() {
        print $state, " ", $msg;
        exit ($ERRORS{$state});
    }

    ## determine health of zpool and subsequent error status
    if ($health eq "ONLINE" ) {
        $state = "OK";

@@ -789,39 +789,39 @@ sub check_zpool() {
            $state = "CRITICAL";
        }
    }

    ## get more detail on possible device failure
    ## flag to detect section of zpool status involving our zpool
    my $poolfind=0;

    $statcommand="/sbin/zpool status $pool";
    if (! open STAT, "$statcommand|") {
        $state = 'CRITICAL';
        print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
        exit $ERRORS{$state};
    }

    ## go through zfs status output to find zpool fses and devices
    while(<STAT>) {
        chomp;

        if (/^\s${pool}/ && $poolfind==1) {
            $poolfind=2;
            next;
        } elsif ( $poolfind==1 ) {
            $poolfind=0;
        }

        if (/NAME\s+STATE\s+READ\s+WRITE\s+CKSUM/) {
            $poolfind=1;
        }

        if ( /^$/ ) {
            $poolfind=0;
        }

        if ($poolfind == 2) {

            ## special cases pertaining to full verbose
            if (/^\sspares/) {
                next unless $verbose == 3;
@@ -839,33 +839,33 @@ sub check_zpool() {

				my $perc;
				my ($sta) = /^\s+\S+\s+(\S+)/;
				if (/%/) {
					($perc) = /([0-9]+%)/;
				} else {
					$perc = "working";
				}
				$dmge=$dmge . "[REPLACING:${sta} (${perc})]:- ";
				next;
			}

			## other cases
			my ($dev, $sta) = /^\s+(\S+)\s+(\S+)/;

			## pool online, not degraded thanks to dead/corrupted disk
			if ($state eq "OK" && $sta eq "UNAVAIL") {
				$state="WARNING";

				## switching to verbose level 2 to explain weirdness
				if ($verbose == 1) {
					$verbose =2;
				}
			}

			## no display for verbose level 1
			next if ($verbose==1);
			## don't display working devices for verbose level 2
			next if ($verbose==2 && $state eq "OK");
			next if ($verbose==2 && ($sta eq "ONLINE" || $sta eq "AVAIL" || $sta eq "INUSE"));

			## show everything else
			if (/^\s{3}(\S+)/) {
				$dmge=$dmge . "<" . $dev . ":" . $sta . "> ";
@@ -876,9 +876,9 @@ sub check_zpool() {

			}
		}
	}

	## calling all goats!
	$msg = sprintf "ZPOOL %s : %s {Size:%s Free:%s Cap:%s} %s\n", $pool, $health, $size, $avail, $cap, $dmge;
	$msg = "$state $msg";
	return ($ERRORS{$state},$msg);
@@ -891,7 +891,7 @@ sub check_zpool() {

######################################################################################################

sub checklock {
	# take argument $lockname.
	#
	# read /var/run/$lockname.lock for a pid on first line and a mutex on second line.
	#

@@ -905,19 +905,19 @@ sub checklock {

	#
	# shorthand - any true return indicates we are clear to lock; a false return indicates
	# that somebody else already has the lock and therefore we cannot.
	#

	my $lockname = shift;
	my $lockfile = "/var/run/$lockname.lock";

	if (! -e $lockfile) {
		# no lockfile
		return 1;
	}

	# lockfile exists. read pid and mutex from it. see if it's our pid. if not, see if
	# there's still a process running with that pid and with the same mutex.

	open FH, "< $lockfile";
	my @lock = <FH>;
	close FH;
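The lockfile protocol described in the comments above (pid on the first line, the owning process's command line as a "mutex" on the second) can be sketched outside Perl. Below is a hedged Python approximation of the same idea; the file layout and the `ps -p PID -o args=` probe mirror the Perl, but the function name and `lockdir` parameter are illustrative, not syncoid's actual code:

```python
import os
import subprocess

def check_lock(lockname: str, lockdir: str = "/var/run") -> bool:
    """Return True if we are clear to take the lock, False if a live
    process still holds it. Mirrors checklock(): the lockfile stores a
    pid on line 1 and that process's command line (the mutex) on line 2."""
    lockfile = os.path.join(lockdir, f"{lockname}.lock")
    if not os.path.exists(lockfile):
        return True  # no lockfile - clear to lock
    with open(lockfile) as fh:
        lines = fh.read().splitlines()
    if len(lines) < 2:
        return True  # malformed lockfile - treat as stale
    pid, mutex = lines[0].strip(), lines[1].strip()
    # is a process with that pid still running with the same command line?
    out = subprocess.run(
        ["ps", "-p", pid, "-o", "args="],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    # the lock is live only if the pid exists AND its args match the mutex
    return out != mutex
```

The mutex line guards against pid reuse: a stale lockfile whose pid now belongs to an unrelated process will not block us, because the command line no longer matches.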
@@ -984,14 +984,14 @@ sub writelock {

	}

	my $pid = $$;

	open PL, "$pscmd -p $$ -o args= |";
	my @processlist = <PL>;
	close PL;

	my $mutex = pop(@processlist);
	chomp $mutex;

	open FH, "> $lockfile";
	print FH "$pid\n";
	print FH "$mutex\n";
@@ -1063,7 +1063,7 @@ sub getargs {

				# if this CLI arg takes a user-specified value and
				# we don't already have it, then the user must have
				# specified with a space, so pull in the next value
				# from the array as this value rather than as the
				# next argument.
				if ($argvalue eq '') { $argvalue = shift(@args); }
				$args{$arg} = $argvalue;

@@ -1077,7 +1077,7 @@ sub getargs {

				# if this CLI arg takes a user-specified value and
				# we don't already have it, then the user must have
				# specified with a space, so pull in the next value
				# from the array as this value rather than as the
				# next argument.
				if ($argvalue eq '') { $argvalue = shift(@args); }
				$args{$arg} = $argvalue;
@@ -1104,4 +1104,3 @@ sub getchilddatasets {

	return @children;
}
@@ -54,13 +54,13 @@

	monthly = 12
	yearly = 0

	### don't take new snapshots - snapshots on backup
	### datasets are replicated in from source, not
	### generated locally
	autosnap = no

	### monitor hourlies and dailies, but don't warn or
	### crit until they're over 48h old, since replication
	### is typically daily only
	hourly_warn = 2880
	hourly_crit = 3600
@@ -9,7 +9,7 @@

[template_default]

# these settings don't make sense in a template, but we use the defaults file
# as our list of allowable settings also, so they need to be present here even if
# unset.
path =
recursive =
@@ -31,7 +31,7 @@ min_percent_free = 10

# We will automatically take snapshots if autosnap is on, at the desired times configured
# below (or immediately, if we don't have one since the last preferred time for that type).
#
# Note that we will not take snapshots for a given type if that type is set to 0 above,
# regardless of the autosnap setting - for example, if yearly=0 we will not take yearlies
# even if we've defined a preferred time for yearlies and autosnap is on.
autosnap = 1;
@@ -63,12 +63,10 @@ monitor = yes

monitor_dont_warn = no
monitor_dont_crit = no
hourly_warn = 90
hourly_crit = 360
daily_warn = 28
daily_crit = 32
monthly_warn = 32
monthly_crit = 35
yearly_warn = 0
yearly_crit = 0
@@ -2,8 +2,8 @@

# this is just a cheap way to trigger mutex-based checks for process activity.
#
# ie ./sleepymutex zfs receive data/lolz if you want a mutex hanging around
# as long as necessary that will show up to any routine that actively does
# something like "ps axo | grep 'zfs receive'" or whatever.

sleep 99999
142
syncoid
@@ -1,6 +1,6 @@

#!/usr/bin/perl

# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
@@ -93,18 +93,18 @@ if (! $args{'recursive'}) {

} else {
	if ($debug) { print "DEBUG: recursive sync of $sourcefs.\n"; }
	my @datasets = getchilddatasets($sourcehost, $sourcefs, $sourceisroot);
	foreach my $dataset(@datasets) {
		$dataset =~ s/$sourcefs//;
		chomp $dataset;
		my $childsourcefs = $sourcefs . $dataset;
		my $childtargetfs = $targetfs . $dataset;
		# print "syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs); \n";
		syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs);
	}
}

# close SSH sockets for master connections as applicable
if ($sourcehost ne '') {
	open FH, "$sshcmd $sourcehost -O exit 2>&1 |";
	close FH;
}
@@ -123,7 +123,7 @@ exit 0;

sub getchilddatasets {
	my ($rhost,$fs,$isroot,%snaps) = @_;
	my $mysudocmd;

	if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
	if ($rhost ne '') { $rhost = "$sshcmd $rhost"; }
@@ -147,35 +147,35 @@ sub syncdataset {

		warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
		return 0;
	}

	# does the target filesystem exist yet?
	my $targetexists = targetexists($targethost,$targetfs,$targetisroot);

	# build hashes of the snaps on the source and target filesystems.
	%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot);

	if ($targetexists) {
		my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot);
		my %sourcesnaps = %snaps;
		%snaps = (%sourcesnaps, %targetsnaps);
	}

	if ($args{'dumpsnaps'}) { print "merged snapshot list of $targetfs: \n"; dumphash(\%snaps); print "\n\n\n"; }

	# create a new syncoid snapshot on the source filesystem.
	my $newsyncsnap;
	if (!defined ($args{'no-sync-snap'}) ) {
		$newsyncsnap = newsyncsnap($sourcehost,$sourcefs,$sourceisroot);
	} else {
		# we don't want sync snapshots created, so use the newest snapshot we can find.
		$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
		if ($newsyncsnap eq 0) {
			warn "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.\n";
			return 0;
		}
	}

	# there is currently (2014-09-01) a bug in ZFS on Linux
	# that causes readonly to always show on if it's EVER
	# been turned on... even when it's off... unless and
@@ -184,23 +184,23 @@ sub syncdataset {

	# dyking this functionality out for the time being due to buggy mount/unmount behavior
	# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
	#my $originaltargetreadonly;

	# sync 'em up.
	if (! $targetexists) {
		# do an initial sync from the oldest source snapshot
		# THEN do an -I to the newest
		if ($debug) {
			if (!defined ($args{'no-stream'}) ) {
				print "DEBUG: target $targetfs does not exist. Finding oldest available snapshot on source $sourcefs ...\n";
			} else {
				print "DEBUG: target $targetfs does not exist, and --no-stream selected. Finding newest available snapshot on source $sourcefs ...\n";
			}
		}
		my $oldestsnap = getoldestsnapshot(\%snaps);
		if (! $oldestsnap) {
			# getoldestsnapshot() returned false, so use new sync snapshot
			if ($debug) { print "DEBUG: getoldestsnapshot() returned false, so using $newsyncsnap.\n"; }
			$oldestsnap = $newsyncsnap;
		}

		# if --no-stream is specified, our full needs to be the newest snapshot, not the oldest.
@@ -213,67 +213,67 @@ sub syncdataset {

		my $disp_pvsize = readablebytes($pvsize);
		if ($pvsize == 0) { $disp_pvsize = 'UNKNOWN'; }
		my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
		if (!$quiet) {
			if (!defined ($args{'no-stream'}) ) {
				print "INFO: Sending oldest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n";
			} else {
				print "INFO: --no-stream selected; sending newest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n";
			}
		}
		if ($debug) { print "DEBUG: $synccmd\n"; }

		# make sure target is (still) not currently in receive.
		if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
			warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
			return 0;
		}
		system($synccmd) == 0
			or die "CRITICAL ERROR: $synccmd failed: $?";

		# now do an -I to the new sync snapshot, assuming there were any snapshots
		# other than the new sync snapshot to begin with, of course - and that we
		# aren't invoked with --no-stream, in which case a full of the newest snap
		# available was all we needed to do
		if (!defined ($args{'no-stream'}) && ($oldestsnap ne $newsyncsnap) ) {

			# get current readonly status of target, then set it to on during sync
			# dyking this functionality out for the time being due to buggy mount/unmount behavior
			# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
			# $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly');
			# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');

			$sendcmd = "$sourcesudocmd $zfscmd send $args{'streamarg'} $sourcefs\@$oldestsnap $sourcefs\@$newsyncsnap";
			$pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
			$disp_pvsize = readablebytes($pvsize);
			if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
			$synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);

			# make sure target is (still) not currently in receive.
			if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
				warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
				return 0;
			}

			if (!$quiet) { print "INFO: Updating new target filesystem with incremental $sourcefs\@$oldestsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
			if ($debug) { print "DEBUG: $synccmd\n"; }

			if ($oldestsnap ne $newsyncsnap) {
				system($synccmd) == 0
					or warn "CRITICAL ERROR: $synccmd failed: $?";
				return 0;
			} else {
				if (!$quiet) { print "INFO: no incremental sync needed; $oldestsnap is already the newest available snapshot.\n"; }
			}

			# restore original readonly value to target after sync complete
			# dyking this functionality out for the time being due to buggy mount/unmount behavior
			# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
			# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
		}
	} else {
		# find most recent matching snapshot and do an -I
		# to the new snapshot

		# get current readonly status of target, then set it to on during sync
		# dyking this functionality out for the time being due to buggy mount/unmount behavior
		# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
@@ -281,20 +281,20 @@ sub syncdataset {

		# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');

		my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used');

		my $matchingsnap = getmatchingsnapshot($sourcefs, $targetfs, $targetsize, \%snaps);
		if (! $matchingsnap) {
			# no matching snapshot; we whined piteously already, but let's go ahead and return false
			# now in case more child datasets need replication.
			return 0;
		}

		# make sure target is (still) not currently in receive.
		if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
			warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
			return 0;
		}

		if ($matchingsnap eq $newsyncsnap) {
			# barf some text but don't touch the filesystem
			if (!$quiet) { print "INFO: no snapshots on source newer than $newsyncsnap on target. Nothing to do, not syncing.\n"; }
|
|||
if ($debug) { print "$targetsudocmd $zfscmd rollback -R $targetfs\@$matchingsnap\n"; }
|
||||
system ("$targetsudocmd $zfscmd rollback -R $targetfs\@$matchingsnap");
|
||||
}
|
||||
|
||||
|
||||
my $sendcmd = "$sourcesudocmd $zfscmd send $args{'streamarg'} $sourcefs\@$matchingsnap $sourcefs\@$newsyncsnap";
|
||||
my $recvcmd = "$targetsudocmd $zfscmd receive -F $targetfs";
|
||||
my $pvsize = getsendsize($sourcehost,"$sourcefs\@$matchingsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
|
||||
my $disp_pvsize = readablebytes($pvsize);
|
||||
if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
|
||||
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
|
||||
|
||||
|
||||
if (!$quiet) { print "Sending incremental $sourcefs\@$matchingsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
|
||||
if ($debug) { print "DEBUG: $synccmd\n"; }
|
||||
system("$synccmd") == 0
|
||||
system("$synccmd") == 0
|
||||
or die "CRITICAL ERROR: $synccmd failed: $?";
|
||||
|
||||
|
||||
# restore original readonly value to target after sync complete
|
||||
# dyking this functionality out for the time being due to buggy mount/unmount behavior
|
||||
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
|
||||
#setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
|
||||
#setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
# prune obsolete sync snaps on source and target.
|
||||
pruneoldsyncsnaps($sourcehost,$sourcefs,$newsyncsnap,$sourceisroot,keys %{ $snaps{'source'}});
|
||||
pruneoldsyncsnaps($targethost,$targetfs,$newsyncsnap,$targetisroot,keys %{ $snaps{'target'}});
|
||||
|
||||
|
||||
} # end syncdataset()
|
||||
|
||||
|
||||
|
@@ -366,7 +366,7 @@ sub getargs {

				# if this CLI arg takes a user-specified value and
				# we don't already have it, then the user must have
				# specified with a space, so pull in the next value
				# from the array as this value rather than as the
				# next argument.
				if ($argvalue eq '') { $argvalue = shift(@args); }
				$args{$arg} = $argvalue;

@@ -381,7 +381,7 @@ sub getargs {

				# if this CLI arg takes a user-specified value and
				# we don't already have it, then the user must have
				# specified with a space, so pull in the next value
				# from the array as this value rather than as the
				# next argument.
				if ($argvalue eq '') { $argvalue = shift(@args); }
				$args{$arg} = $argvalue;
@@ -450,14 +450,14 @@ sub checkcommands {

	my $targetssh;

	# if --nocommandchecks then assume everything's available and return
	if ($args{'nocommandchecks'}) {
		if ($debug) { print "DEBUG: not checking for command availability due to --nocommandchecks switch.\n"; }
		$avail{'compress'} = 1;
		$avail{'localpv'} = 1;
		$avail{'localmbuffer'} = 1;
		$avail{'sourcembuffer'} = 1;
		$avail{'targetmbuffer'} = 1;
		return %avail;
	}

	if (!defined $sourcehost) { $sourcehost = ''; }
@@ -466,7 +466,7 @@ sub checkcommands {

	if ($sourcehost ne '') { $sourcessh = "$sshcmd $sourcehost"; } else { $sourcessh = ''; }
	if ($targethost ne '') { $targetssh = "$sshcmd $targethost"; } else { $targetssh = ''; }

	# if raw compress command is null, we must have specified no compression. otherwise,
	# make sure that compression is available everywhere we need it
	if ($args{'rawcompresscmd'} eq '') {
		$avail{'sourcecompress'} = 0;
@@ -489,14 +489,14 @@ sub checkcommands {

	}

	my ($s,$t);
	if ($sourcehost eq '') {
		$s = '[local machine]'
	} else {
		$s = $sourcehost;
		$s =~ s/^\S*\@//;
		$s = "ssh:$s";
	}
	if ($targethost eq '') {
		$t = '[local machine]'
	} else {
		$t = $targethost;
@@ -510,15 +510,15 @@ sub checkcommands {

	if (!defined $avail{'targetmbuffer'}) { $avail{'targetmbuffer'} = ''; }

	if ($avail{'sourcecompress'} eq '') {
		if ($args{'rawcompresscmd'} ne '') {
			print "WARN: $args{'compresscmd'} not available on source $s- sync will continue without compression.\n";
		}
		$avail{'compress'} = 0;
	}
	if ($avail{'targetcompress'} eq '') {
		if ($args{'rawcompresscmd'} ne '') {
			print "WARN: $args{'compresscmd'} not available on target $t - sync will continue without compression.\n";
		}
		$avail{'compress'} = 0;
	}

@@ -529,9 +529,9 @@ sub checkcommands {

	}

	# corner case - if source AND target are BOTH remote, we have to check for local compress too
	if ($sourcehost ne '' && $targethost ne '' && $avail{'localcompress'} eq '') {
		if ($args{'rawcompresscmd'} ne '') {
			print "WARN: $args{'compresscmd'} not available on local machine - sync will continue without compression.\n";
		}
		$avail{'compress'} = 0;
	}
@@ -572,7 +572,7 @@ sub checkcommands {

	} else {
		$avail{'localpv'} = 1;
	}

	return %avail;
}
@@ -690,7 +690,7 @@ sub buildsynccmd {

		if (defined $args{'source-bwlimit'}) {
			$bwlimit = $args{'source-bwlimit'};
		} elsif (defined $args{'target-bwlimit'}) {
			$bwlimit = $args{'target-bwlimit'};
		}

		if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $bwlimit $mbufferoptions |"; }
@@ -748,7 +748,7 @@ sub pruneoldsyncsnaps {

	my @prunesnaps;

	# only prune snaps beginning with syncoid and our own hostname
	foreach my $snap(@snaps) {
		if ($snap =~ /^syncoid_$hostid/) {
			# no matter what, we categorically refuse to
			# prune the new sync snap we created for this run
@@ -771,7 +771,7 @@ sub pruneoldsyncsnaps {

			if ($rhost ne '') { $prunecmd = '"' . $prunecmd . '"'; }
			if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; }
			if ($debug) { print "DEBUG: $rhost $prunecmd\n"; }
			system("$rhost $prunecmd") == 0
				or warn "CRITICAL ERROR: $rhost $prunecmd failed: $?";
			$prunecmd = '';
			$counter = 0;
@@ -779,13 +779,13 @@ sub pruneoldsyncsnaps {

	}
	# if we still have some prune commands stacked up after finishing
	# the loop, commit 'em now
	if ($counter) {
		$prunecmd =~ s/\; $//;
		if ($rhost ne '') { $prunecmd = '"' . $prunecmd . '"'; }
		if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; }
		if ($debug) { print "DEBUG: $rhost $prunecmd\n"; }
		system("$rhost $prunecmd") == 0
			or warn "WARNING: $rhost $prunecmd failed: $?";
	}
	return;
}
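The pruneoldsyncsnaps hunks above show a batch-and-flush loop: destroy commands are stacked into one string, flushed every `$maxsnapspercmd` snapshots, and any remainder is committed after the loop. A hedged Python sketch of that control flow (the function name, dataset name, and batch size are illustrative only, not syncoid's code):

```python
def batch_commands(snaps, max_per_cmd=8):
    """Group snapshot destroys into batched command strings, flushing
    every max_per_cmd snapshots plus one final partial batch - the
    same shape as pruneoldsyncsnaps' counter loop."""
    batches = []
    cmd = ""
    counter = 0
    for snap in snaps:
        cmd += f"zfs destroy pool/ds@{snap}; "
        counter += 1
        if counter >= max_per_cmd:
            batches.append(cmd.rstrip("; "))
            cmd = ""
            counter = 0
    # leftovers still stacked up after the loop get committed now
    if counter:
        batches.append(cmd.rstrip("; "))
    return batches
```

Batching matters when the destroys run over SSH: one remote invocation per batch instead of one per snapshot.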
@@ -807,7 +807,7 @@ sub getmatchingsnapshot {

		print "          Replication to target would require destroying existing\n";
		print "          target. Cowardly refusing to destroy your existing target.\n\n";

		# experience tells me we need a mollyguard for people who try to
		# zfs create targetpool/targetsnap ; syncoid sourcepool/sourcesnap targetpool/targetsnap ...

		if ( $targetsize < (64*1024*1024) ) {
@@ -828,7 +828,7 @@ sub newsyncsnap {

	my %date = getdate();
	my $snapname = "syncoid\_$hostid\_$date{'stamp'}";
	my $snapcmd = "$rhost $mysudocmd $zfscmd snapshot $fs\@$snapname\n";
	system($snapcmd) == 0
		or die "CRITICAL ERROR: $snapcmd failed: $?";
	return $snapname;
}
@@ -926,7 +926,7 @@ sub getsnaps() {

}

sub getsendsize {
	my ($sourcehost,$snap1,$snap2,$isroot) = @_;

	my $mysudocmd;
@@ -952,9 +952,9 @@ sub getsendsize {

	close FH;
	my $exit = $?;

	# process sendsize: last line of multi-line output is
	# size of proposed xfer in bytes, but we need to remove
	# human-readable crap from it
	my $sendsize = pop(@rawsize);
	$sendsize =~ s/^size\s*//;
	chomp $sendsize;
@@ -964,8 +964,8 @@ sub getsendsize {

	if ($debug) { print "DEBUG: sendsize = $sendsize\n"; }
	if ($sendsize eq '' || $exit != 0) {
		$sendsize = '0';
	} elsif ($sendsize < 4096) {
		$sendsize = 4096;
	}
	return $sendsize;
}
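getsendsize's post-processing, shown across the two hunks above, takes the last line of `zfs send -nv` output, strips the leading `size` label, then clamps the result: zero on failure or empty output, a floor of 4096 otherwise. A hedged Python approximation of that parsing (the function name and line layout are illustrative, not syncoid's code):

```python
def parse_sendsize(raw_lines, exit_code=0):
    """Last line of `zfs send -nv` output holds the estimated stream
    size; strip the 'size' prefix and clamp the way getsendsize()
    does: 0 on failure/empty, a 4096-byte floor otherwise."""
    if not raw_lines:
        return 0
    sendsize = raw_lines[-1].strip()
    # remove the leading "size" label, e.g. "size\t123456" -> "123456"
    if sendsize.startswith("size"):
        sendsize = sendsize[len("size"):].strip()
    if sendsize == "" or exit_code != 0:
        return 0
    size = int(sendsize)
    # clamp tiny estimates up to 4096, as the Perl does
    return max(size, 4096)
```

The zero return is what lets callers display the transfer size as `UNKNOWN` rather than feeding a bogus estimate to `pv`.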
@@ -984,5 +984,3 @@ sub getdate {

	$date{'stamp'} = "$date{'year'}-$date{'mon'}-$date{'mday'}:$date{'hour'}:$date{'min'}:$date{'sec'}";
	return %date;
}