
I want to execute a command on an Amazon EC2 server which takes more than 30 minutes.

I used node-ssh to execute the command on the EC2 instance, but this module waits until the execution is finished. Lambda functions run for a maximum of 15 minutes before timing out.

My problem here is the Lambda function gets timeout before execution is finished.

In the dataProcess function I have a few steps:

  1. sftpConnect: establish the SSH connection - this is working
  2. initiateCopy: copy the file from S3 to the EC2 instance
  3. triggerUtility: execute the utility on the EC2 instance

Step 3 takes 30-40 minutes, but Lambda's maximum execution time is 15 minutes, so this results in a timeout failure.

I need a solution where I can just fire and forget the command.

exports.handler = async (event) => {
    if (event.Records && event.Records[0]) {
        const awsBucket = event.Records[0].s3.bucket.name;
        const fileKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
        // await the chain; otherwise the Lambda may finish (and be frozen)
        // before the async work completes
        await dataProcess();
    }
}

async function dataProcess() {
    await sftpConnect();
    await initiateCopy();
    await triggerUtility();
    await terminateConnection();
}
function sftpConnect() {
    console.log('sftpConnect 1');
    return new Promise((resolve, reject) => {
        var password = process.env.PASSWORD || CONFIG.PASSWORD;
        ssh.connect({
            host: process.env.HOST ,
            username: process.env.USER ,
            password: password,
            port: process.env.PORT,
            readyTimeout: 40000,
            tryKeyboard: true,
            onKeyboardInteractive: (name, instructions, instructionsLang, prompts, finish) => {
                if (prompts.length > 0 && prompts[0].prompt.toLowerCase().includes('password')) {
                    finish([password]);
                }
            }
        }).then(function() {
            console.log("Connected to Source");
            resolve('success');
        }, function(error) {
            console.log("ERROR connecting to source: " + (process.env.HOST || CONFIG.HOST) + " - ", error);
            reject(new Error(error));
        });
    });
}
function initiateCopy() {
    return new Promise((resolve, reject) => {
        var cmd = process.env.AWS_CP_CMD || CONFIG.AWS_CP_CMD;
        cmd = cmd.replace(/TODAY/g, TODAY); // replace every TODAY placeholder
        console.log('AWS Cmd ', cmd);
        ssh.execCommand(cmd).then(function(output) {
            resolve('resolved 1');
        }, function(error) {
            reject(new Error(error));
        });
    });
}
function triggerUtility() {
    return new Promise((resolve, reject) => {
        var cmd = process.env.JAVA_UTILITY_CMD || CONFIG.JAVA_UTILITY_CMD;
        ssh.execCommand(cmd).then(function(output) {
            resolve('resolved 1');
        }, function(error) {
            reject(new Error(error));
        });
    });
}
2 Comments
  • It is unclear which steps are done on Lambda and which parts are done on EC2. Can you clarify this, or perhaps explain why you are using Lambda? Commented Aug 26, 2019 at 10:42
  • All steps are performed by the Lambda: 1. establish connection, 2. copy, 3. trigger utility, 4. close connection. Steps 1 and 2 execute smoothly as they take only a minute. Step 3 takes 30-40 minutes, which is why the Lambda times out. I want to execute step 3 fire-and-forget: just initiate it, don't wait for the promise, and proceed with the remaining execution, i.e. closing the connection. Step 3 should execute on the EC2 server in the background. Commented Aug 26, 2019 at 10:48

4 Answers


You should try executing the process on the EC2 server in the background. There are several ways to accomplish this, but the simplest is to append an & to the end of the shell command. This launches the Linux process in the background, after which you should be able to close the SSH connection and end the Lambda function execution.
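A minimal sketch of this approach, keeping the asker's `ssh` connection object: the command is wrapped with nohup, the output streams are redirected, and the job is sent to the background so execCommand returns as soon as the shell forks it. `backgroundify` is a hypothetical helper name; the redirects matter because without them the remote shell can hold the session open waiting on stdout.

```javascript
// Wrap a shell command so it survives the SSH session ending.
// nohup detaches it from the terminal; the redirects release stdout/stderr/stdin;
// the trailing & backgrounds it.
function backgroundify(cmd, logFile) {
    return `nohup ${cmd} > ${logFile} 2>&1 < /dev/null &`;
}

// Usage inside triggerUtility (same `ssh` object as in the question):
// const cmd = backgroundify(process.env.JAVA_UTILITY_CMD, '/tmp/utility.log');
// await ssh.execCommand(cmd); // resolves immediately; the jar keeps running on EC2
```

The log file path is an assumption; any writable location on the instance works.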


3 Comments

A nohup + & should run the process in the background; that should be the best option here. @OP: consider prefixing the command with nohup: var cmd = 'nohup ' + (process.env.JAVA_UTILITY_CMD || CONFIG.JAVA_UTILITY_CMD);
I tried with &, but the process does not keep running on EC2; it ends. Step 3 triggers a jar for execution: java -Dlog4j.configuration=file:log4j.properties -jar dynamodb-updater-log4j.jar "/home/abcusr/DataUpload/demo" & The above command should execute on the server and the Lambda should exit gracefully.
@KishorSawant did you try adding nohup as well, like suggested above?

Consider changing your EC2-side code so rather than performing the copy and trigger from Lambda, the EC2 instance watches an SQS queue and begins the copy upon receiving a message. Your Lambda code should exit immediately after sending the message to the queue. If you need to do some work in Lambda post-trigger, you can send a message from the EC2 instance to another queue which will then trigger a new Lambda function of its own.

This has the added advantage of being very scalable.
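The Lambda side of this design can be sketched as follows. The queue URL and message shape are assumptions (the EC2-side poller just needs to agree on the JSON fields); `buildJobMessage` is a hypothetical helper.

```javascript
// The Lambda's only job becomes enqueueing the work item and exiting.
// The EC2 instance polls the queue and performs the copy + utility run itself.
function buildJobMessage(queueUrl, bucket, key) {
    return {
        QueueUrl: queueUrl,
        MessageBody: JSON.stringify({ bucket, key }), // parsed by the EC2 poller
    };
}

// With the AWS SDK for JavaScript v3 (assumed available in the runtime):
// const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');
// const sqs = new SQSClient({});
// await sqs.send(new SendMessageCommand(
//     buildJobMessage(process.env.QUEUE_URL, awsBucket, fileKey)));
```

Since SendMessage returns in milliseconds, the Lambda finishes well inside its limit regardless of how long the EC2-side job takes.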

Comments


As you say, the code runs for longer than 15 minutes, so it is not suitable for Lambda. You could try increasing the memory to see if it runs faster, but it is unlikely to be fast enough.

You should therefore design a solution that does not require an AWS Lambda function. This can be done by running fully on an Amazon EC2 instance.

1 Comment

The part that takes a long time is on the EC2 server, the Lambda function is just sitting there waiting, so increasing the Lambda memory wouldn't help.

AWS Systems Manager's Run Command should work for this scenario.
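A sketch of what that could look like, assuming the instance runs the SSM agent and has an instance profile permitting it; the instance ID and `buildRunCommand` helper are assumptions. Run Command executes the shell command on the instance without any SSH connection, so the Lambda can return right after SendCommand is accepted.

```javascript
// Build the SendCommand parameters for SSM's built-in shell-script document.
// SSM runs the command on the instance asynchronously; the Lambda does not wait.
function buildRunCommand(instanceId, shellCommand) {
    return {
        DocumentName: 'AWS-RunShellScript', // built-in SSM document
        InstanceIds: [instanceId],
        Parameters: { commands: [shellCommand] },
    };
}

// With the AWS SDK for JavaScript v3 (assumed):
// const { SSMClient, SendCommandCommand } = require('@aws-sdk/client-ssm');
// const ssm = new SSMClient({});
// await ssm.send(new SendCommandCommand(
//     buildRunCommand(process.env.INSTANCE_ID, process.env.JAVA_UTILITY_CMD)));
```

This also removes the SSH password handling from the Lambda entirely.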

Comments
