I need to monitor a directory tree for file changes. On any change I'd like to trigger a backup script (essentially a git commit, but including a database dump).
Since further changes are likely to follow within a short timeframe, I'd like to wait n seconds before executing the backup script, to reduce the number of commits.
To sum up my requirements:
- monitor a directory recursively for file changes
- grace period of n seconds for further changes before execution of backup
- any changes occurring during a backup run must not be lost
In JavaScript I would do something like the following on any file change:
if (timeout) {
    clearTimeout(timeout);
}
timeout = setTimeout(callback, 60000);
I'm not sure how to replicate this behavior in a bash script. My current approach is as follows:
watch.sh:
#!/bin/bash
inotifywait \
    --recursive \
    --monitor \
    --event attrib,modify,move,create,delete \
    --format %e \
    /usr/src/app/html/themes/ |
while read -r events; do
    flock -n /var/run/backup-watch.lockfile -c /usr/src/scripts/watch-backup.sh
done
watch-backup.sh:
#!/bin/bash
# wait for more changes to happen
sleep 60
# run script
/usr/src/scripts/commit-changes.sh
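For context, commit-changes.sh is only described here as "essentially a git commit, but including a database dump", so the following is just a hypothetical sketch; the dump command, paths and commit message are placeholders:
#!/bin/bash
# Hypothetical sketch of commit-changes.sh; dump command, paths and
# commit message are placeholders, not the actual script.
cd /usr/src/app/html || exit 1

# Refresh the database dump inside the repository (placeholder command).
mysqldump my_database > db/dump.sql

# Stage everything and commit only if something actually changed.
git add -A
git diff --cached --quiet || git commit -m "Automated backup $(date -u +%F_%T)"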
The combination of watch.sh and watch-backup.sh gives me the grace period, but any changes that happen while commit-changes.sh is running are lost: flock -n exits immediately while the lock is held, so those events never trigger another backup.
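For comparison, here is one way the setTimeout/clearTimeout pattern might be replicated directly inside the read loop, using bash's read -t as the timer. It reuses the path, event list and 60-second window from the scripts above and is only a sketch:
#!/bin/bash
inotifywait \
    --recursive \
    --monitor \
    --event attrib,modify,move,create,delete \
    --format %e \
    /usr/src/app/html/themes/ |
while read -r event; do
    # An event arrived; keep draining events until 60 s pass without a new
    # one (each new event restarts the timer, like clearTimeout + setTimeout).
    while read -r -t 60 event; do
        :
    done
    # Quiet period over: run the backup. Events that arrive while this runs
    # stay buffered in the pipe and start the next cycle afterwards.
    /usr/src/scripts/commit-changes.sh
done
Because inotifywait keeps writing to the pipe while commit-changes.sh runs, events that arrive during a backup are picked up by the next iteration of the outer loop rather than dropped; only a very large burst that overflows the kernel's inotify queue could still be missed.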
Alternatives:
- I considered using a Node.js script, but fs.watch does not support recursive directory watching on Linux, so this is not an option.
- Cron could be an alternative, but I'd prefer to react to changes instead of running on a schedule. The backup script generates a new commit on every run because the db dump is part of it. I could make the backup smarter by checking whether anything other than the db dump changed (see the sketch below), but I'd like to keep that part simple.
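If that check ever becomes worth the extra complexity, it could be as small as a git pathspec that excludes the dump file; db/dump.sql is a placeholder path here:
# Skip the commit when the only change is the database dump.
if [ -z "$(git status --porcelain -- . ':(exclude)db/dump.sql')" ]; then
    echo "Only the database dump changed; skipping commit."
    exit 0
fi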