
From myhost.mydomain.com, I start an nc listener. Then I log in to another host to start a netcat push back to my host:

nc -l 9999 > data.gz &
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"

These two commands are part of a script. Only 32K bytes are sent to the host before the ssh command terminates; the nc listener then gets an EOF and terminates as well.

When I run the ssh command on the command line (i.e. not as part of the script) on myhost.mydomain.com, the complete file is downloaded. What's going on?

  • Note: You can achieve the same with ssh repo.mydomain.com "cat /path/to/file.gz" > data.gz. Maybe your script is more complex, but perhaps someone overlooked the simple solution. (Commented Aug 11, 2015 at 11:41)

2 Answers


I think there is something else that happens in your script which causes this effect. For example, if you run the second command in the background as well and terminate the script, your OS might kill the background commands during script cleanup.
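For example, here is a minimal sketch of that failure mode (the actual script isn't shown in the question, so this layout is an assumption); waiting on the background listener avoids it:

#!/bin/bash
# Hypothetical reconstruction of the script. The listener runs in the
# background, so without a wait the script can reach its end (and the
# shell can clean up its children) while data is still in flight.
nc -l 9999 > data.gz &
listener_pid=$!
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
wait "$listener_pid"   # block until the listener has received EOF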

Also look for set -o pipefail, which makes a pipeline return a failure status when any of its commands exits with != 0; combined with set -e, that terminates the script.
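A quick illustration, assuming bash:

#!/bin/bash
set -e -o pipefail
# Without pipefail, the pipeline's exit status is that of its last
# command (cat, which succeeds), so the failure of false is masked.
# With pipefail, the pipeline fails, and with -e the script exits here.
false | cat
echo "never reached"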

On a second note, the approach looks overly complex to me. Try to reduce it to

ssh repo.mydomain.com "cat /path/to/file.gz" > data.gz

(ssh connects the remote command's stdout to its own local stdout). It's clearer when you write it like this:

ssh > data.gz repo.mydomain.com "cat /path/to/file.gz"

That way, you can get rid of nc. As far as I know, nc is synchronous, so the second invocation (which sends the data) should only return after all the data has been sent and flushed.


Comments

I can't stream all the data via ssh (or scp it) because of the low compute capacity available on the sender; it is faster (at least 4-5x in my case) to simply netcat it. There's no pipefail either. I tweaked the second command to be ssh repo.mydomain.com "nc myhost.mydomain.com 9999 < /path/to/file.gz". Same observation, only that this time 4K bytes were sent. This behavior is independent of the file.
The 4KB is an indication that nc could only fill the send buffer (which is 4KB) once before it was stopped. Try to wrap nc in a script which runs date 1>&2 (sending its output to stderr) before and after the command, just to see how long it runs. Also try nc -v; that might give you more information about how many bytes are written/received. As for speed: the speedup probably happens because ssh encrypts the connection. See serverfault.com/questions/116875/… for ways to optimize.
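A sketch of that timing wrapper (the script name is hypothetical):

#!/bin/sh
# nc-timed.sh -- logs start and end times to stderr so you can see how
# long nc actually runs before it is stopped.
date 1>&2
nc -v myhost.mydomain.com 9999 < /path/to/file.gz
date 1>&2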

I had a similar problem and in the end found out that the login daemon's configuration was set to terminate all processes on logout.

Check your systemd login daemon configuration for KillUserProcesses (kill user processes on logout). This setting overrides nohup, disown, and the like.

The file is /etc/systemd/logind.conf:

# See logind.conf(5) for details.
[Login]
#NAutoVTs=6
#ReserveVT=6
KillUserProcesses=no

The KillUserProcesses option was set to yes by default in my case.
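logind only reads logind.conf at startup, so the change needs a restart of the service to take effect (standard systemd tooling, but verify on your distribution):

sudo systemctl restart systemd-logind             # re-read the edited logind.conf
grep KillUserProcesses /etc/systemd/logind.conf   # confirm the setting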

