Create date-folder version backups with hard links and a combined view
I am using the ideas laid out in this article to create incremental versioned backups of my data. I basically sync the data to a current folder in my backup destination and then create a date folder with hard links to the current folder. I end up with this:
$ ls
...
2019-01-01_10-00-01
2019-01-02_10-00-01
...
2019-02-15_10-00-01
...
current
It works great. If I ever need to do a full restore from a specific date, I can just restore everything from that date's folder.
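For example, a full restore of one source from a specific date is just an rsync back out of that date's folder (an illustrative sketch; the date folder and target path are made up):
# make "source 1" look exactly like it did at that point in time
rsync -a --delete "/path/to/backup/2019-01-02_10-00-01/source 1/" "/path/to/source 1/"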
But if you're looking for previous versions of a specific file, you have to go through each of the date folders to find what you want. I want to create another folder that holds a running history of every file, with an entry for each time it changed. A combined view, if you will.
I came up with this, and it works, but I am wondering if there is a more elegant, standard way to do this.
#!/bin/bash
NOW=$(/bin/date +%Y-%m-%d_%H-%M-%S)
# the data that needs to be backed up
SOURCES=("/path/to/source 1" "/path/to/source 2")
# where it is going
DESTINATION="/path/to/backup"
# make sure the destination exists
mkdir -p "$DESTINATION"
# make sure there is a place to put the current data
mkdir -p "$DESTINATION/current"
# make sure there is a place to put the "combined" data
mkdir -p "$DESTINATION/combined"
# sync the data
rsync -v -a --delete "${SOURCES[@]}" "$DESTINATION/current"
# check if files were backed up
# any file with only one link is either new, and needs to have a hard link version
# or it wasn't fully backed up previously and needs a hard link version
if [[ $(find "$DESTINATION/current" -type f -links 1 | wc -l) -ne 0 ]] ; then
    # make a date folder backup using hard links
    cp -al "$DESTINATION/current" "$DESTINATION/$NOW"

    # make a combined view
    # - find all files with 2 links
    #   - one link is to the file in the $DESTINATION/current
    #   - the other link is to the file in $DESTINATION/$NOW
    # - there should never be any files with only 1 hard link since the previous command
    #   is sure to have created a second link
    # - any files with more than 2 links were, hopefully, already covered during a previous iteration
    cd "$DESTINATION/current" && find * -type f -links 2 -print0 | while IFS= read -r -d $'' filePath
    do
        fileName="$(basename "$filePath")"
        fileFolder="$(dirname "$filePath")"

        # where the file will live in the combined folder
        # need to mirror the folder structure
        destinationFolder="$DESTINATION/combined/$fileFolder"
        mkdir -p "$destinationFolder"

        # make a hard link to it
        cp -al "$filePath" "$destinationFolder/$fileName.$NOW"
    done
fi
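(As an aside, the link counts the script keys off of can be inspected by hand with stat; this command is illustrative only and not part of the script:)
# print the number of hard links pointing at a file (the value find's -links test compares against)
stat -c '%h' "backup/current/source 1/001"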
The code does work. After a few iterations, this is what it creates:
Files in the current folder (this is a "live" copy of the source data):
backup/current/source 1/001
backup/current/source 1/002
backup/current/source 1/003
backup/current/source 1/file 100
backup/current/source 1/folder/004
backup/current/source 2/006
Files in the date-specific folders (note that the first backup contains files that aren't in the second, because they were deleted in between):
backup/2019-01-15_23-08-02/source 1/001
backup/2019-01-15_23-08-02/source 1/002
backup/2019-01-15_23-08-02/source 1/003
backup/2019-01-15_23-08-02/source 1/file 100
backup/2019-01-15_23-08-02/source 1/folder/004
backup/2019-01-15_23-08-02/source 1/folder/005
backup/2019-01-15_23-08-02/source 2/006
backup/2019-01-15_23-08-02/source 2/007
backup/2019-01-15_23-09-00/source 1/001
backup/2019-01-15_23-09-00/source 1/002
backup/2019-01-15_23-09-00/source 1/003
backup/2019-01-15_23-09-00/source 1/file 100
backup/2019-01-15_23-09-00/source 1/folder/004
backup/2019-01-15_23-09-00/source 2/006
And these are the files in the combined view:
backup/combined/source 1/001.2019-01-15_23-08-02
backup/combined/source 1/002.2019-01-15_23-08-02
backup/combined/source 1/003.2019-01-15_23-08-02
backup/combined/source 1/003.2019-01-15_23-09-00
backup/combined/source 1/file 100.2019-01-15_23-08-02
backup/combined/source 1/folder/004.2019-01-15_23-08-02
backup/combined/source 1/folder/004.2019-01-15_23-09-00
backup/combined/source 1/folder/005.2019-01-15_23-08-02
backup/combined/source 2/006.2019-01-15_23-08-02
backup/combined/source 2/006.2019-01-15_23-09-00
backup/combined/source 2/007.2019-01-15_23-08-02
This way, if I need to find a previous version of source 1/folder/004, I just need to go to its matching folder in backup/combined/ (backup/combined/source 1/folder) and all the 004 files are there, with a date/time stamp appended.
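For example, listing every saved version of that file becomes a one-liner (the path matches the sample output above, and sorting works because the timestamp format orders lexically):
# show all saved versions of "004", oldest first
ls -1 "backup/combined/source 1/folder" | grep '^004\.' | sort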
Is there a better, more elegant way to do this?
linux backup rsync cp hardlink
Might just want to use backup software
– Xen2050
Jan 16 at 6:24
None do what I want. And most are unnecessarily bloated. Why use them when a small script works? The code I have works and does what I want, but I thought there might be a more apt way to do it -- like some Linux command that I might not know about.
– IMTheNachoMan
Jan 17 at 1:03
Even duplicity?
– Xen2050
Jan 17 at 1:42
No. In fact, I think each duplicity archive only contains the changed files, so there is no way to view a point-in-time archive of all the files, and you can't browse all files in a combined folder structure to quickly find a previous version of a specific file.
– IMTheNachoMan
Jan 17 at 2:26
Actually, duplicity has a --time option to "Specify the time from which to restore or list files." I think files are normally compressed, so browsing them immediately may not be possible, but they would take up less space (and only the changes from large files are saved, instead of a whole file copy), and extracting only a few compressed & encrypted files from a specific date should be really easy.
– Xen2050
Jan 17 at 2:35
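For reference, the duplicity invocations being described would look roughly like this (an untested sketch; the backup URL and file paths are made up):
# list the files that existed in the backup at a given point in time
duplicity list-current-files --time 2019-01-15 file:///path/to/backup
# restore a single file as it was on that date
duplicity restore --time 2019-01-15 --file-to-restore "source 1/folder/004" file:///path/to/backup /tmp/004.restored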