Deduplicate an orderly archive
Deduplicate an orderly archive. Deduplicating an orderly archive will replace all files that have the same content with "hard links". This requires hard link support in the underlying operating system, which is available on all unix-like systems (e.g. macOS and Linux) and on Windows since Vista. However, on Windows this might require somewhat elevated privileges. If you use this feature, it is very important that you treat your orderly archive as read-only (though you should treat it that way anyway), as changing one copy of a linked file changes every other instance of it - the files are literally the same file.
Arguments

root: The path to an orderly root directory, or NULL (the default) to search for one from the current working directory if locate is TRUE.

locate: Logical, indicating if the configuration should be searched for. If config is not given, then orderly looks in the working directory and up through its parents until it finds an orderly_config.yml file.

dry_run: Logical, indicating if the deduplication should be planned but not run.

quiet: Logical, indicating if the status should not be printed.
Value

Invisibly, information about the duplication status of the archive before deduplication was run.
Details

This function will alter your orderly archive. Ordinarily this is not something that should be done, so we try to be careful. In order for this to work, it is very important to treat your orderly archive as read-only generally. If your canonical orderly archive is behind OrderlyWeb this will almost certainly be the case already.
With "hard linking", two files with the same content can be updated so that both files point at the same physical bit of data. This is useful: if the file is large, only one copy needs to be stored. However, it means that if a change is made to one copy of the file, it is immediately reflected in the other, and there is nothing to indicate that the files are linked!
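The behaviour described above can be observed directly at the command line; a minimal sketch on a unix-like system (this assumes GNU coreutils, where `stat -c %h` prints the link count; on macOS/BSD use `stat -f %l` instead):

```shell
# Scratch directory for the demonstration.
tmp=$(mktemp -d)
echo "original" > "$tmp/a.txt"

# Hard-link b.txt to a.txt: both names now refer to the same inode.
ln "$tmp/a.txt" "$tmp/b.txt"

# The link count on a.txt is now 2 (a.txt and b.txt).
stat -c %h "$tmp/a.txt"

# Writing through one name is immediately visible through the other,
# because there is only one underlying file.
echo "changed" > "$tmp/a.txt"
cat "$tmp/b.txt"

rm -r "$tmp"
```

Nothing in the directory listing distinguishes a hard-linked file from an ordinary one, which is exactly why the archive must be treated as read-only after deduplication.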
This approach is worth exploring if you have large files that are outputs of one report and inputs to another, large inputs repeatedly used in different reports, or outputs that end up being the same in multiple reports. If you run the deduplication with dry_run = TRUE, an indication of the savings will be printed.
Examples

path <- orderly::orderly_example("demo")
id1 <- orderly::orderly_run("minimal", root = path)
#> [ name ] minimal
#> [ id ] 20220118-093842-6cb7bd1a
#> [ start ] 2022-01-18 09:38:42
#> [ data ] source => dat: 20 x 2
#>
#> > png("mygraph.png")
#>
#> > par(mar = c(15, 4, 0.5, 0.5))
#>
#> > barplot(setNames(dat$number, dat$name), las = 2)
#>
#> > dev.off()
#> agg_png
#>   2
#> [ end ] 2022-01-18 09:38:42
#> [ elapsed ] Ran report in 0.03870296 secs
#> [ artefact ] mygraph.png: f8f77b71be1e3a2c8c9a27396905d549

id2 <- orderly::orderly_run("minimal", root = path)
#> [ name ] minimal
#> [ id ] 20220118-093842-8591701e
#> [ start ] 2022-01-18 09:38:42
#> [ data ] source => dat: 20 x 2
#>
#> > png("mygraph.png")
#>
#> > par(mar = c(15, 4, 0.5, 0.5))
#>
#> > barplot(setNames(dat$number, dat$name), las = 2)
#>
#> > dev.off()
#> agg_png
#>   2
#> [ end ] 2022-01-18 09:38:42
#> [ elapsed ] Ran report in 0.03680801 secs
#> [ artefact ] mygraph.png: f8f77b71be1e3a2c8c9a27396905d549

orderly_commit(id1, root = path)
#> [ commit ] minimal/20220118-093842-6cb7bd1a
#> [ copy ]
#> [ import ] minimal:20220118-093842-6cb7bd1a
#> [ success ] :)
#> "/private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/RtmpMWUKEw/file17e36141311/archive/minimal/20220118-093842-6cb7bd1a"

orderly_commit(id2, root = path)
#> [ commit ] minimal/20220118-093842-8591701e
#> [ copy ]
#> [ import ] minimal:20220118-093842-8591701e
#> [ success ] :)
#> "/private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/RtmpMWUKEw/file17e36141311/archive/minimal/20220118-093842-8591701e"

tryCatch(
  orderly::orderly_deduplicate(path, dry_run = TRUE),
  error = function(e) NULL)
#> Deduplication information for
#> /private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/RtmpMWUKEw/file17e36141311/archive
#>   - 6 tracked files
#>   - 76.64 kB total size
#>   - 3 duplicate files
#>   - 38.32 kB duplicated size
#>   - 0 deduplicated files
#>   - 0 B deduplicated size
#>   - 0 untracked files
#>   - 0 B untracked size