Consider this not-so-hypothetical scenario: you have some data on server_a that you would like to copy to server_b. Unfortunately, these two servers cannot communicate with each other, nor do they have access to any common network-mounted storage. Bummer. You do, however, have a jumpbox from which you can SSH to either server using a service account with passwordless root sudo privileges.

Such accounts often exist in environments that utilize configuration management systems and other types of centralized automation. Normally I would SSH to server_a; make a tarball of whatever I need; log back out to the jumpbox; SCP the tarball to the jumpbox; SCP it onward to server_b; SSH there and untar; finally, delete the tarballs from all three systems.
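Spelled out as commands, the manual round-trip looks something like this (the svcacct account name and all paths below are placeholders, not values from my script):

```shell
# Manual copy via the jumpbox -- "svcacct" and every path here are
# placeholders for illustration only.

# 1. Create the tarball on the source as root.
ssh svcacct@server_a "sudo tar czf /tmp/data.tgz -C / var/www/html"

# 2. Pull it to the jumpbox, then push it onward to the target.
scp svcacct@server_a:/tmp/data.tgz /tmp/data.tgz
scp /tmp/data.tgz svcacct@server_b:/tmp/data.tgz

# 3. Unpack on the target as root.
ssh svcacct@server_b "sudo tar xzf /tmp/data.tgz -C /"

# 4. Clean up all three copies of the tarball.
ssh svcacct@server_a "sudo rm -f /tmp/data.tgz"
ssh svcacct@server_b "sudo rm -f /tmp/data.tgz"
rm -f /tmp/data.tgz
```

Seven round-trips, three copies of the data, and a root tarball sitting in /tmp on three machines until you remember to delete it.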

This multi-step process is tedious, not very secure, and sometimes requires considerable additional filesystem space. Now imagine a few files changed on the source and you need to transfer just those changes. Ugh…

A much better option is to use sshfs (yum install sshfs). The little script I have below was written for RHEL/CentOS 5-7 and has a couple of prerequisites: a) you have a jumpbox that can connect to both server_a and server_b via SSH using key-based authentication with a common service account; b) this service account can sudo to root on both server_a and server_b without being prompted for a password.
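You can verify both prerequisites from the jumpbox before running the script. A quick check might look like this (svcacct is a placeholder for your service account):

```shell
# Prerequisite check -- "svcacct" is a placeholder service account name.
for h in server_a server_b; do
  # BatchMode=yes fails instead of prompting, so this proves key-based
  # auth works; "sudo -n true" proves passwordless sudo works.
  if ssh -o BatchMode=yes "svcacct@${h}" "sudo -n true"; then
    echo "${h}: OK"
  else
    echo "${h}: SSH key auth or passwordless sudo failed"
  fi
done
```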

The basic command syntax is quite simple:
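The script's exact argument syntax isn't reproduced here, but given the description below it takes a source and a target in scp-style host:/path notation, which the shell can split with plain parameter expansion:

```shell
# Hypothetical invocation -- the real script's argument syntax may differ:
#   g server_a:/var/www/html server_b:/var/www/html

# scp-style specs split cleanly with shell parameter expansion:
src="server_a:/var/www/html"
src_host="${src%%:*}"   # everything before the first colon
src_path="${src#*:}"    # everything after the first colon
echo "host=${src_host} path=${src_path}"
```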

The script will SSH to the source and the target and run sftp-server on both as root, which allows it to mount filesystems from both machines with root privileges. It will then create target:/path/ if it doesn't already exist, and run a basic rsync archive operation to transfer the data. Finally, the temporary mounts on your jumpbox are unmounted and removed.
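The sequence above can be sketched as follows. This is not the script itself, just a minimal illustration of the technique: host names, paths, and the svcacct account are placeholders, and it assumes the fuse-sshfs package and the standard RHEL/CentOS sftp-server path.

```shell
# Sketch only -- "svcacct", hosts, and paths are placeholders.
SRC_HOST=server_a; SRC_PATH=/var/www/html
DST_HOST=server_b; DST_PATH=/var/www/html
MNT=$(mktemp -d)
mkdir -p "${MNT}/src" "${MNT}/dst"

# Mount both remote filesystems with root privileges by running the
# remote sftp-server under sudo (binary path is the RHEL/CentOS default).
sshfs -o sftp_server="sudo /usr/libexec/openssh/sftp-server" \
      "svcacct@${SRC_HOST}:/" "${MNT}/src"
sshfs -o sftp_server="sudo /usr/libexec/openssh/sftp-server" \
      "svcacct@${DST_HOST}:/" "${MNT}/dst"

# Create the target directory if needed, then rsync in archive mode.
mkdir -p "${MNT}/dst${DST_PATH}"
rsync -a "${MNT}/src${SRC_PATH}/" "${MNT}/dst${DST_PATH}/"

# Unmount and remove the temporary mount points.
fusermount -u "${MNT}/src"
fusermount -u "${MNT}/dst"
rm -rf "${MNT}"
```

Because rsync only copies deltas, rerunning this after a few files change on the source transfers just those changes, which is exactly where the tarball approach falls apart.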

You can also download this script from my GitHub page. Copy it to a folder of your choice and make a convenient link, say, /sbin/g.