
Suppose that I have a large model in Simulink, let's call it model A. Now consider a very small subset of model A, call it model B. When model B computes something, its results are sent to other parts of model A, which then act on that data. However, model B may take a long time to compute - this is not a problem for those other blocks of model A, they are happy to receive data from model B whenever model B is done computing. In essence, I want model B to run in parallel to model A, so that the entire simulation/process is not halted while waiting for model B to finish its computation. Is this possible to do in Simulink?

Comments:

  • You could try Rate Transition blocks - but I'm not entirely sure if that works for your case. Commented Jun 15, 2016 at 18:12
  • From your problem description I am not sure you really want to use parallel computing. It might be possible, but it would inherently result in a simulation that is no longer deterministic. It might be a better idea to run model B at a lower sample rate. Commented Jun 15, 2016 at 18:21
  • This isn't possible within a single Simulink model. You'd need to set Model B up as a completely separate model, running in its own process, and have Model A and Model B communicate via something like UDP or file transfer, depending on how complex you want things to get. Commented Dec 17, 2018 at 15:58

1 Answer


I think dataflow domains are intended to address the issue you described.

A dataflow domain automatically partitions that part of your Simulink model and simulates the subsystem using multiple threads.

In both simulation and code generation of models with dataflow domains, the software identifies possible concurrencies in your system and partitions the dataflow domain using two types of parallelism:

  1. Task Parallelism
  2. Model Pipeline Execution (Pipelining).
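
If scripting is easier for you, the execution domain of a subsystem can also be set from the MATLAB command line. This is only a minimal sketch: the model and subsystem names are placeholders, and the SetExecutionDomain / ExecutionDomainType block parameters are assumed to be available in your release (dataflow domains require DSP System Toolbox), so please check against your version's documentation.

    % Mark an existing subsystem as a dataflow domain (sketch; names are placeholders)
    mdl = 'modelA';                                      % hypothetical top-level model
    blk = 'modelA/modelB';                               % hypothetical subsystem holding model B's logic
    load_system(mdl);
    set_param(blk, 'SetExecutionDomain', 'on');          % expose the execution-domain setting
    set_param(blk, 'ExecutionDomainType', 'Dataflow');   % run this subsystem as a dataflow domain

After that, the software analyzes the subsystem and chooses a multithreaded schedule where it can.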

Task Parallelism:

Task parallelism splits an application into multiple tasks and distributes those tasks across multiple processing threads. Some tasks have data dependencies on others, so not all tasks can run at exactly the same time.

Model Pipeline Execution:

The software uses model pipeline execution, or pipelining, to work around the limitation of task parallelism that dependent tasks cannot run fully in parallel. It introduces delays between tasks where there is a data dependency, so that each task can work on a different sample of the data at the same time.
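
To make the pipelining idea concrete, here is a plain-MATLAB sketch (conceptual only, with made-up stage functions; it is not how the Simulink dataflow engine is implemented). The one-sample delay between the two dependent stages means each stage works on a different sample, so a multithreaded schedule could run them at the same time.

    % Conceptual sketch of pipelining; stage1/stage2 are hypothetical stages
    stage1 = @(u) 2*u;              % first processing stage
    stage2 = @(u) u + 1;            % second stage, depends on stage1's output
    x = rand(1, 10);                % input samples
    pipeReg = NaN;                  % the inserted delay (pipeline register)
    y = NaN(1, 10);
    for n = 1:10
        % stage2 uses the result stage1 produced on the PREVIOUS step, so the
        % two calls have no dependency within a step and could run concurrently.
        y(n)    = stage2(pipeReg);
        pipeReg = stage1(x(n));
    end

The cost of this decoupling is one sample of extra latency at the output.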

Please have a look at the links below:

https://www.mathworks.com/help/dsp/ug/dataflow-domains.html

https://www.mathworks.com/help/dsp/ug/multicore-simulation-and-code-generation-of-dataflow-systems.html
