
"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Source 'source1': Job aborted due to stage failure: Serialized task 10:0 was 135562862 bytes, which exceeds max allowed: spark.rpc.message.maxSize (134217728 bytes). Consider increasing spark.rpc.message.maxSize or using broadcast variables for large values."

I have a simple ADF pipeline that was working fine but started failing a few days ago.

The source is a REST API call. Can you please help me fix this? Where can I change the suggested setting?




1 Answer


"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Source 'source1': Job aborted due to stage failure: Serialized task 10:0 was 135562862 bytes, which exceeds max allowed: spark.rpc.message.maxSize (134217728 bytes). Consider increasing spark.rpc.message.maxSize or using broadcast variables for large values."

According to the error message, the problem appears to be caused by overloaded Spark resources. Try experimenting with the integration runtime and increasing the compute size of the Data Flow runtime; the compute size determines the size of the Spark cluster the data flow runs on. For reference, a Spark-level sketch of the setting named in the error follows the steps below.

  • Go to the linked service and click the pencil icon next to the integration runtime.
  • Data Flow Runtime >> Compute size
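
As a reference point, spark.rpc.message.maxSize is a Spark-level configuration. ADF Mapping Data Flows do not, as far as I know, let you set arbitrary Spark configurations, but if you can reproduce the workload somewhere you control the Spark session (for example a Synapse or Databricks notebook), the following minimal PySpark sketch shows what the error is suggesting: raising the RPC message limit, or using a broadcast variable so large values are not serialized into every task. The app name, config value, and lookup data here are illustrative assumptions, not something ADF itself will pick up.

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("maxsize-demo")
        # spark.rpc.message.maxSize is specified in MiB; the default is 128,
        # which matches the 134217728-byte limit shown in the error.
        .config("spark.rpc.message.maxSize", "256")
        .getOrCreate()
    )

    # The error's other suggestion: broadcast large read-only values instead of
    # letting them be captured in task closures, which bloats serialized tasks.
    lookup = {"US": "United States", "IN": "India"}          # stand-in for a large object
    broadcast_lookup = spark.sparkContext.broadcast(lookup)  # shipped to executors once

    df = spark.createDataFrame([("US",), ("IN",)], ["code"])
    mapped = df.rdd.map(lambda row: broadcast_lookup.value.get(row.code, "unknown"))
    print(mapped.collect())

If the flow must stay in ADF, increasing the Data Flow runtime compute size as described above is the main lever you have from the portal.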

2 Comments

Thanks, Pratik. I tried the above suggestion, but it did not work.
Please raise a support ticket for deeper investigation since you are still facing the same issue, or check whether your REST API is returning data in the proper format.
