
I have a Perl script that was running fine from crontab, but it suddenly stopped running without any modification.

cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
  • The top of the script file is #!/usr/bin/perl -X
  • The expected output of this script is changes in the database
  • I have another script file set up the same way, and it still works fine
  • When I run the file in the browser it works fine and executes all lines without any problem
  1. I tried the full path /usr/bin/perl, but it didn't work
  2. I tried putting perl at the beginning of the command, but it didn't work
  3. I ran the command over SSH using PuTTY, but nothing happened
  4. I checked the log file /var/log/cron, but there are no errors at all
  5. I created a temporary log file with /home/user/public_html/crons/script.pl > /tmp/temp.log 2>&1 to see the errors, but the log is empty (see the sketch below)
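
For reference, a temporary debugging crontab entry along these lines keeps both STDOUT and STDERR so nothing is silently discarded (a sketch only: the paths come from the question, while the schedule and the log file name are placeholders):

    # temporary entry while debugging -- remove once the cause is found
    */5 * * * * cd /home/user/public_html/crons && ./script.pl > /tmp/script_cron.log 2>&1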

Here is the solution:

I found the issue: there was a stuck process for the same cron file, so I killed this process and it is fixed.

You can find the process for your cron file like this: ps aux | grep 'your cron file here'
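
For example (sketch only; the PID shown is made up, and the script name comes from the question), the stuck run can be located and terminated like this:

    # list running instances of the cron script; the [s] trick keeps grep itself out of the results
    ps aux | grep '[s]cript.pl'
    # user  12345  0.0  0.1 ... perl ./script.pl     <- example output line, the PID is hypothetical

    # terminate the stuck process by its PID
    kill 12345
    # only if it ignores SIGTERM, force it
    kill -9 12345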

5 Comments
  • You should use MAILTO=[email protected] in the crontab file to be notified about the source of the problem. Please visit the linked webpage for an explanation. Commented Apr 16, 2020 at 0:15
  • The following webpage gives another, more detailed example. Commented Apr 16, 2020 at 0:17
  • I just did, but nothing was received in my mail. Commented Apr 16, 2020 at 0:24
  • Check the mail settings on your computer -- there is a high chance that something is not configured properly. Commented Apr 16, 2020 at 0:26
  • Be sure to remove all the redirects to ensure you get email output if it throws an error from cron. Commented Apr 16, 2020 at 1:15

1 Answer


This is a really common antipattern people seem to tend toward with cron.

Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it has opened the log file, and those are lost. It also might crash in a way that never reaches the logging mechanism.

At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email. (Also, test your mail setup with a temporary cron job like 1 * * * * echo "Test".)
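
A sketch of what the crontab might look like at this stage (the MAILTO address and the schedule fields are placeholders; keep your existing schedule and the paths from the question):

    MAILTO=user@example.com
    # temporary test job to confirm cron can deliver mail at all (runs at minute 1 of every hour)
    1 * * * * echo "Test"
    # the real job, with the redirects removed so any output or error is mailed
    */15 * * * * cd /home/user/public_html/crons && ./script.pl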

The next better solution is to change it to >> /var/log/myscript/current.log, set up something to rotate the log files (like logrotate), and make sure to create that directory with permissions that allow the user the script runs as to write to it. By redirecting only STDOUT of the script, any errors or warnings it writes to STDERR still cause you to get an email, and if there are no errors/warnings the output goes to the log file and no email gets sent.
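
As a sketch (the directory, owner, and rotation policy below are only examples, not anything from the original post):

    # one-time setup: create the log directory, owned by the user the cron job runs as
    mkdir -p /var/log/myscript
    chown user:user /var/log/myscript

    # crontab entry: STDOUT appended to the log, STDERR left alone so errors still get mailed
    */15 * * * * cd /home/user/public_html/crons && ./script.pl >> /var/log/myscript/current.log

    # /etc/logrotate.d/myscript -- example rotation policy
    /var/log/myscript/current.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }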

Neither of those changes solves the root problem, though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independently of cron, your tests will have the same exact environment, and they will be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
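
A minimal sketch of that setup might look like the following (the unit file contents, paths, user, and schedule are assumptions based on the question, not anything prescribed here):

    # /etc/systemd/system/my_custom.service
    [Unit]
    Description=Database-updating Perl task normally triggered by cron

    [Service]
    Type=oneshot
    User=user
    WorkingDirectory=/home/user/public_html/crons
    ExecStart=/usr/bin/perl /home/user/public_html/crons/script.pl

    # crontab entry (root's crontab, since systemctl start usually needs privileges)
    */15 * * * * systemctl start my_custom.service

You can then watch the output of every run, whether started by cron or by hand, with journalctl -u my_custom.service.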

I don't particularly advocate systemd myself, but thankfully there are lots of alternative service managers (though installing and configuring one is a bigger task than just using systemd if your distro is already based on systemd). Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
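
As one concrete example (sketch only; runit is simply one such supervisor, and the service name and paths are hypothetical), the service is defined once and cron only issues the "run once" directive:

    # /etc/service/myscript/run -- runit service definition for the task
    #!/bin/sh
    cd /home/user/public_html/crons
    exec ./script.pl

    # a "down" file stops runit from launching or restarting the service on its own
    touch /etc/service/myscript/down

    # crontab entry: start the service once; runit will not restart it when it exits
    */15 * * * * sv once myscript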

Back to your original problem: once you get some logging, you are likely to discover it is a permission problem or an upgraded module in the system perl.
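
Once output is being captured, checks along these lines usually expose either cause (a sketch; the DBI module is only an illustrative example of something the script might depend on):

    # confirm the cron user can read and execute the script
    ls -l /home/user/public_html/crons/script.pl

    # compile-check the script with the same interpreter the shebang points at
    /usr/bin/perl -c /home/user/public_html/crons/script.pl

    # verify that a module the script depends on still loads under the system perl
    /usr/bin/perl -MDBI -e 'print "DBI $DBI::VERSION\n"'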


3 Comments

Thanks for your explanation. First, I tested the mail setup using a temporary cron job, 1 * * * * echo "Test", and I received the mail immediately, which means the mail setup is working fine, but I didn't receive any mails or errors belonging to script.pl. Then I did the log file trick >> /var/log/myscript/current.log, and again the log is empty. At the same time, when I run the file in the browser it works fine and executes all lines without any problem. So, what may cause this issue?
So actually, you are describing a CGI script, right? When a web server runs a CGI script it also sets some environment variables like REQUEST_URI, HTTP_SERVER, and others. It might be that your script will only run correctly under a normal CGI environment. What a lot of people do for this is have a cron job that fetches the URL instead of running the script directly, like wget -O- -nv http://localhost/path/to/script.pl >> /var/log/myscript/current.log
I found the issue: there was a stuck process for the same cron file, so I killed this process and it is fixed.
