
I have a script.sh that I run from the terminal (./script.sh).

When script.sh runs, it starts other commands (ffmpeg in this case) based on some conditions.

This works, but the problem is: if I stop/kill script.sh, the ffmpeg processes are also killed/stopped.

The ffmpeg commands (and other commands) shouldn't depend on script.sh; even if I close it, they should keep running.

#!/bin/bash
while xxxxxxx; do

startFFMPEG() {
    nohup $(/root/bin/ffmpeg -i "url" -i "/var/www/logo/logo.png" outputcmd -y "1.m3u8" </dev/null >/dev/null 2>&1 ||
            /root/bin/ffmpeg -i "url" -i "/var/www/logo/logo.png" outputcmd -y "2.m3u8" </dev/null >/dev/null 2>&1 ||
            /root/bin/ffmpeg -i "url" -i "/var/www/logo/logo.png" outputcmd -y "3.m3u8" </dev/null >/dev/null 2>&1 ) >/dev/null 2>&1 &
}
    echo "RUN COMMAND1"
    startFFMPEG &

startFFMPEG() {
    nohup $(/root/bin/ffmpeg -i "url2" -i "/var/www/logo/logo.png" outputcmd -y "11.m3u8" </dev/null >/dev/null 2>&1 ||
            /root/bin/ffmpeg -i "url2" -i "/var/www/logo/logo.png" outputcmd -y "12.m3u8" </dev/null >/dev/null 2>&1 ||
            /root/bin/ffmpeg -i "url3" -i "/var/www/logo/logo.png" outputcmd -y "13.m3u8" </dev/null >/dev/null 2>&1 ) >/dev/null 2>&1 &
}
    echo "RUN COMMAND 2"
    startFFMPEG &

sleep 10
done

What can I do to fix this so that ffmpeg keeps running even if the script is killed?

  • nohup $(...)??? What do you think those lines do? Commented Mar 27, 2021 at 17:59
  • Why do you want to kill the script if you want the commands to continue to run? Your script is the parent program that starts other programs, so when you kill it, it will not continue starting new ones. All that duplication makes my eyes hurt. Commented Mar 27, 2021 at 18:01
  • If for any reason script.sh is killed (for example, I update the code in script.sh and then re-run it), all the ffmpeg commands are killed. Commented Mar 27, 2021 at 18:03
  • I simplified the script here because it has a lot of commands, but this is the logic. I only need a way for the script to run the ffmpeg commands so that, even if the script itself is killed, the ffmpeg commands continue to run. Commented Mar 27, 2021 at 18:05
  • Send the script a signal instead and have it exit gracefully when it's about to loop. If you want it to finish faster, have the script write out the state it's currently at; the next time you start it, read the last state and continue from there (see the sketch after this comment thread). Commented Mar 27, 2021 at 18:06
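
A minimal sketch of that graceful-exit idea, assuming the OP's startFFMPEG function and loop condition (xxxxxxx), and a hypothetical state file /tmp/script.state:

#!/bin/bash
stop_requested=0
trap 'stop_requested=1' TERM   # note the request instead of dying mid-iteration

while xxxxxxx; do
    startFFMPEG &                      # the OP's existing launcher
    date +%s > /tmp/script.state       # hypothetical progress marker
    sleep 10
    (( stop_requested )) && exit 0     # exit only between iterations
done

Here kill -TERM <script pid> asks the script to stop, and it exits on its own at the end of the current iteration, leaving the background ffmpeg jobs running.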

3 Answers


There's no reason to use nohup here -- none whatsoever.

nohup only does four things:

  1. Redirect stdin from /dev/null (if previously coming from a TTY)
  2. Redirect stdout to nohup.out (if previously going to a TTY)
  3. Redirect stderr to nohup.out (if previously going to a TTY)
  4. Ignore any HUP ("hangup") signals received. (The shell already ignores HUPs by default in noninteractive scripts, and even in an interactive interpreter, disown -h can be used to force the behavior).

Those things are all you need to make sure a closed terminal doesn't take child processes with it, and they protect you exactly as well as nohup does. (If you also want protection against ctrl+c, you need setsid or an explicit ignore of SIGINT in addition.)

You can do all this yourself. Most of it, you already were doing yourself.

#!/bin/bash

trap : HUP # Tell the shell not to do anything when it gets a HUP

startFFMPEG() {
    trap '' INT  # ignore SIGINT so a later ctrl+c can't take ffmpeg down
    # Try output names 1.m3u8 through 3.m3u8 until one ffmpeg run succeeds
    for ((i=0; i<3; i++)); do
      /root/bin/ffmpeg \
        -i "url" \
        -i "/var/www/logo/logo.png" \
        outputcmd \
        -y "$((i+1)).m3u8" \
        </dev/null >/dev/null 2>&1 \
        && return
    done
}

while xxxxxxx; do 
    echo "RUN COMMAND1" 
    startFFMPEG &

    echo "RUN COMMAND 2" 
    startFFMPEG &

    sleep 10
done

7 Comments

Welcome. I'm still not sure what OP is trying to do though
OP doesn't say what is meant by "stop/kill", but the default signal for kill is SIGTERM. As I understand it, OP wants the script to die but the sub-processes to continue running. I'm not sure trap gives you that behavior.
@AllanWind, the shell doesn't go out of its way to kill jobs it started while it's shutting down in the first place. Subprocesses die because their stdout goes away and they get a SIGPIPE, or their TTY goes away and they get a SIGHUP, or their entire process group gets a SIGINT in the case of ctrl+c.
@AllanWind, ...so, if the parent shell gets a SIGTERM, nothing needs to be changed to let the children live, absent something else going on at the same time (like the terminal shutting down, or a process being in a command substitution, so its stdout is a pipe the shell reads, causing a SIGPIPE after that shell dies) that makes one of the other mechanisms relevant. (A quick demo follows this thread.)
There are 20 commands that should run independently, so if I don't use nohup I have to wait for them one by one, because it never gets to the second if the first is still running.
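
A quick way to see the point made above, using a hypothetical throwaway test.sh:

#!/bin/bash
# test.sh: start one background child, then wait on it
sleep 300 &
echo "child pid: $!, script pid: $$"
wait

Run ./test.sh, send the script a plain SIGTERM from another terminal (kill <script pid>), and ps -p <child pid> will show the sleep still alive after the script has died.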

When you kill(3posix) the parent process (script.sh), it will obviously stop executing new commands. Any child processes, like the sub-shell $() and the ffmpeg processes, will continue to run.

If you interrupt the parent process with ctrl-c, however, SIGINT is sent to the whole process group, which also terminates the sub-processes. The way to fix that is to run your sub-processes in a new session with setsid:

setsid /root/bin/ffmpeg -i "url" ... &

As @CharlesDuffy points out, the children would normally get a SIGPIPE when they try to write to a pipe connected to the parent. In this case their output has been redirected to /dev/null, so no such writes will occur.
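
Applied to the question's loop, a minimal sketch (keeping the OP's placeholders "xxxxxxx", "url", and "outputcmd"):

#!/bin/bash
while xxxxxxx; do
    echo "RUN COMMAND1"
    # setsid puts ffmpeg in its own session and process group,
    # so the SIGINT from ctrl-c can't reach it
    setsid /root/bin/ffmpeg -i "url" -i "/var/www/logo/logo.png" \
        outputcmd -y "1.m3u8" </dev/null >/dev/null 2>&1 &

    sleep 10
done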

5 Comments

$() isn't necessarily going to keep running -- the read end of its stdout is a FIFO held by the parent shell, so writes will trigger a SIGPIPE.
He is redirecting those to /dev/null.
Indeed so, so it's safe in the OP's specific case, but not in the general case. Folks reading this answer won't always know which context is important for determining whether statements the answer contains also apply to their code and use cases, so it's helpful to spell things out.
@CharlesDuffy updated answer with your feedback, let me know if I missed the mark or could be more clear.
Update looks good to me. And I've updated my answer to add a caveat about how changing the session group can be relevant if one needs to be able to survive ctrl+c, and not just the terminal exiting.

You can use the screen utility.

On Ubuntu, install it:

sudo apt update
sudo apt install screen

Then, launch screen in a terminal:

screen

To return to your most recent screen session:

screen -r

More info on this link
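
For this question's use case, each ffmpeg command could also be started directly in its own detached session; a minimal sketch, assuming a hypothetical session name ffmpeg1 and the OP's placeholder arguments:

# -dmS starts a detached, named session running the command;
# it keeps running after the terminal closes
screen -dmS ffmpeg1 /root/bin/ffmpeg -i "url" -i "/var/www/logo/logo.png" outputcmd -y "1.m3u8"

# List sessions, and reattach to a named one later
screen -ls
screen -r ffmpeg1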

