I’m trying to find a good method of making periodic, incremental backups. I assume that the most minimal approach would be to have a cronjob run rsync periodically, but I’m curious what other solutions may exist.

I’m interested in both command-line and GUI solutions.

  • @inex@feddit.de · 2 years ago

    Timeshift is a great tool for creating incremental backups. Basically it’s a frontend for rsync, and it works well. If needed, you can also use it from the CLI.
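
    For reference, a rough sketch of the CLI side (the snapshot comment and tag are just examples):

        # Take an on-demand snapshot, tagged as a daily one.
        sudo timeshift --create --comments "before upgrade" --tags D

        # List existing snapshots, then restore one interactively.
        sudo timeshift --list
        sudo timeshift --restore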

  • mariom · 2 years ago

    Is it just me, or does the backup topic come up every few days on !linux@lemmy.ml and !selfhosted@lemmy.world?

    To be on topic as well: I use the restic + autorestic combo. Pretty simple; I made a repo with a small script that generates the config for different machines, and that’s it. Backups are stored between machines and on B2.
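
    For anyone unfamiliar with the tools, a minimal sketch of the underlying restic workflow against B2 (bucket name, paths, and retention are placeholders; the commenter drives this through autorestic’s generated config instead):

        # Credentials and repository password; values are placeholders.
        export B2_ACCOUNT_ID="xxxxxxxx"
        export B2_ACCOUNT_KEY="xxxxxxxx"
        export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"

        restic -r b2:my-bucket:backups init            # one-time repository setup
        restic -r b2:my-bucket:backups backup ~/docs   # incremental snapshot
        restic -r b2:my-bucket:backups forget \
            --keep-daily 7 --keep-weekly 4 --prune     # thin out old snapshots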

  • @PlexSheep@feddit.de · 2 years ago

    I have a bash script that backs all my stuff up to my home server with Borg. My servers have cronjobs that run similar scripts.
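
    A minimal sketch of what such a script might look like (repo URL, paths, and retention counts are invented, not the commenter’s values):

        #!/usr/bin/env bash
        set -euo pipefail

        export BORG_REPO="ssh://backup@homeserver/./borg-repo"
        export BORG_PASSCOMMAND="cat $HOME/.borg-passphrase"

        # Dated archive of the home directory, skipping caches.
        borg create --stats --compression zstd \
            --exclude "$HOME/.cache" \
            "::{hostname}-{now:%Y-%m-%d}" "$HOME"

        # Keep the repository from growing without bound.
        borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
        borg compact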

  • @elscallr@lemmy.world · 2 years ago

    Exactly like you think. A cronjob runs a periodic rsync of a handful of directories under /home. My OS is on a different drive that doesn’t get backed up. My configs are in an Ansible repository hosted on my home server and backed up the same way.
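
    The crontab entry for that kind of setup can be a single line (schedule, paths, and host are examples):

        # m h dom mon dow  command  (daily at 03:00)
        0 3 * * * rsync -a --delete /home/user/docs /home/user/projects backup@homeserver:/backups/desktop/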

  • @okda@lemmy.ml · 2 years ago

    Check out Pika Backup. It’s a beautiful frontend for Borg. And Borg is the shit.

  • @rodbiren@midwest.social · 2 years ago

    I use Syncthing on several devices to replicate the data I want to keep backups of: family photos, journals, important docs, etc. It works perfectly, and I run a relay node to give back to the community, since I’m on an unlimited data connection.

    • @stewsters@lemmy.world · 2 years ago

      I use syncthing for my documents as well. My source code is on GitHub if it’s important, and I can reinstall everything else if I need to.

  • @to_urcite_ty_kokos@lemmy.world · 2 years ago

    Git projects and system configs are on GitHub (see etckeeper); the rest is synced to my self-hosted Nextcloud instance using their desktop client. There I have periodic backups using Borg, for both the files and the Nextcloud database.
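
    The etckeeper half boils down to something like this (the remote URL is a placeholder, and pushing to GitHub this way is an assumption; etckeeper’s own hooks handle day-to-day commits):

        sudo etckeeper init                       # put /etc under git (once)
        sudo etckeeper commit "baseline config"   # snapshot the current state
        cd /etc
        sudo git remote add origin git@github.com:user/etc-backup.git
        sudo git push -u origin master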

  • @HughJanus@lemmy.ml · 2 years ago

    I don’t, really. I don’t have much data that is irreplaceable.

    The ones that are get backed up manually to Proton Drive and to my NAS (via SMB).

  • @akash_rawal@lemmy.world · 2 years ago

    I use an rsync + btrfs snapshot solution (rough sketch below):

    1. Use rsync to incrementally collect all data into a btrfs subvolume
    2. Deduplicate using duperemove
    3. Create a read-only snapshot of the subvolume

    I don’t have a backup server, just an external drive that I only connect during backup.

    Deduplication is mediocre; I’m still looking for a snapshot-aware replacement for duperemove.
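
    Roughly, the three steps translate to (device name and paths are examples):

        mount /dev/sdX1 /mnt/backup   # external btrfs drive, connected only for backups

        # 1. Incrementally sync everything into a working subvolume.
        [ -d /mnt/backup/current ] || btrfs subvolume create /mnt/backup/current
        rsync -aHAX --delete /home/ /mnt/backup/current/

        # 2. Deduplicate shared extents.
        duperemove -dr --hashfile=/mnt/backup/hashes.db /mnt/backup/current

        # 3. Freeze the result as a read-only, dated snapshot.
        btrfs subvolume snapshot -r /mnt/backup/current "/mnt/backup/snap-$(date +%Y-%m-%d)"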

    • Jo Miran · 2 years ago

      I’m not trying to start a flame war, but I’m genuinely curious: why do people like btrfs over ZFS? Btrfs seems very much “not ready for prime time”.

      • @EddyBot@feddit.de · 2 years ago

        btrfs is included in the Linux kernel; ZFS is not on most distros.
        External kernel modules do sometimes break on a kernel upgrade, and even that small chance is probably scary enough for a lot of people.

      • @akash_rawal@lemmy.world · 2 years ago

        The features needed for most btrfs use cases are all stable, and btrfs is readily available in the Linux kernel, whereas ZFS needs an additional kernel module. The availability advantage of btrfs is a big plus in case of a disaster, i.e. no additional work is required to recover your files.

        (All of the above only applies if your primary OS is Linux; if you use Solaris, ZFS might be better.)

  • @HarriPotero@lemmy.world · 2 years ago

    I rotate between a few computers. Everything is synced between them with syncthing, and they all have automatic btrfs snapshots, so I have several physical points to roll back from.

    For a worst-case scenario, everything is also synced offsite weekly to a pCloud share. I have a little script that mounts it with pcloudfs and encfs and then rsyncs any updates.
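
    A loose sketch of that script (the pcloudfs invocation is a guess at the commenter’s tooling; the encfs and rsync calls are standard, and all paths are placeholders):

        #!/usr/bin/env bash
        set -euo pipefail

        pcloudfs /mnt/pcloud                             # assumed FUSE mount of the pCloud share
        encfs /mnt/pcloud/backup.enc /mnt/backup-clear   # decrypted view of the encrypted dir
        rsync -a --delete "$HOME/synced/" /mnt/backup-clear/
        fusermount -u /mnt/backup-clear
        fusermount -u /mnt/pcloud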

  • @InverseParallax@lemmy.world · 2 years ago

    I do most of my work on NFS, with ZFS backing on raidz2, and send snapshots for offline backup.

    Don’t have a serious offsite setup yet, but it’s coming.
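
    For context, the snapshot-shipping part typically looks like this (pool and dataset names are invented):

        zfs snapshot tank/work@2024-01-01

        # Full send to a file on a removable drive...
        zfs send tank/work@2024-01-01 | zstd > /mnt/offline/work-2024-01-01.zfs.zst

        # ...or an incremental send into a second pool.
        zfs send -i tank/work@2023-12-01 tank/work@2024-01-01 | zfs receive backuppool/work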