Suppose that I have a large model in Simulink, let's call it model A. Now consider a very small subset of model A, call it model B. When model B computes something, these results are sent to other parts of model A and they do certain things as a function of that. However, model B may take a long time to compute - nevertheless, this is not a problem for these other blocks of model A, they are happy to receive data from model B whenever model B is done computing. In essence, I want model B to run in parallel to model A, such that the entire simulation/process is not halted while waiting for model B to finish its things. Is this possible to do in Simulink?
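Outside Simulink, the pattern being asked about is a classic producer/consumer hand-off: the slow "model B" runs on its own thread, and the "model A" loop keeps stepping, picking up B's result whenever it is ready. A minimal Python sketch (all names and timings are illustrative assumptions, not Simulink APIs):

```python
import threading
import queue
import time

results = queue.Queue()

def model_b(u):
    """Stand-in for the slow subsystem: takes a while, then posts a result."""
    time.sleep(0.05)          # pretend this is a long computation
    results.put(u ** 2)

# Launch "model B" in the background; "model A" is not blocked by it.
threading.Thread(target=model_b, args=(7,), daemon=True).start()

latest = None                  # model A's view of B's most recent output
steps = 0
while latest is None and steps < 100:
    steps += 1                 # model A keeps simulating its own steps...
    try:
        latest = results.get_nowait()   # ...and polls for B's output
    except queue.Empty:
        time.sleep(0.01)       # nothing yet; carry on with the next step

print(latest)  # 49 once model B finishes
```

The key point is that `get_nowait` never blocks, so the main loop's timing is decoupled from how long `model_b` takes.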
-
You could try Rate Transition blocks - but I'm not entirely sure if that works for your case. – Robert Seifert, Jun 15, 2016 at 18:12
-
From your problem description I am not sure if you really want to use parallel computing. It might be possible, but it would inherently result in a simulation which is no longer deterministic. It might be a better idea to run model B at a lower sample rate. – Daniel, Jun 15, 2016 at 18:21
-
This isn't possible within a single Simulink model. You'd need to set Model B up as a completely separate model, running in its own process, and have Model A and Model B communicate via something like UDP or file transfer, depending on how complex you want things to get. – Phil Goddard, Dec 17, 2018 at 15:58
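The UDP hand-off described in the comment above can be sketched in Python rather than as two Simulink processes; the message format (a single double) and the non-blocking polling on the receiving side are assumptions for illustration:

```python
import socket
import struct
import select

def send_result(value, port):
    """'Model B' side: fire-and-forget a finished result as a UDP datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(struct.pack("d", value), ("127.0.0.1", port))
    sock.close()

def poll_result(sock, last_value, timeout=0.0):
    """'Model A' side: check for a new result without stalling the simulation.

    Returns the freshly received value, or last_value if nothing arrived
    within `timeout` seconds.
    """
    ready, _, _ = select.select([sock], [], [], timeout)
    if ready:
        data, _ = sock.recvfrom(8)
        return struct.unpack("d", data)[0]
    return last_value

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = receiver.getsockname()[1]

value = 0.0                              # default before Model B finishes
value = poll_result(receiver, value)     # nothing sent yet -> stays 0.0
send_result(3.14, port)
value = poll_result(receiver, value, timeout=1.0)  # picks up B's result
receiver.close()
```

Because UDP is connectionless, Model A never waits on Model B; it simply uses the last value it has, which matches the "happy to receive data whenever model B is done" requirement in the question.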
1 Answer
I think dataflow domains are intended to address the issue you described.
A dataflow domain automatically partitions your Simulink model and simulates the subsystem using multiple threads.
In both simulation and code generation of models with dataflow domains, the software identifies opportunities for concurrency in your system and partitions the dataflow domain using two types of parallelism:
- Task Parallelism
- Model Pipeline Execution (Pipelining)
Task Parallelism:
Task parallelism achieves concurrency by splitting an application into multiple tasks and distributing those tasks across multiple processing nodes. Because some tasks can have data dependencies on others, not all tasks run at exactly the same time.
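Task parallelism with a data dependency can be sketched in Python with a thread pool (the two worker functions are hypothetical stand-ins, not actual Simulink tasks):

```python
from concurrent.futures import ThreadPoolExecutor

# Two independent pieces of work - these could run at the same time.
def sensor_filter(x):
    return x * 2

def controller_gain(x):
    return x + 10

with ThreadPoolExecutor(max_workers=2) as pool:
    # The independent tasks are submitted concurrently...
    f1 = pool.submit(sensor_filter, 3)
    f2 = pool.submit(controller_gain, 3)
    # ...but this step depends on both results, so it must wait for them.
    combined = f1.result() + f2.result()

print(combined)  # 6 + 13 = 19
```

The `f1.result() + f2.result()` line is exactly the kind of data dependency the text mentions: it prevents the final step from running in parallel with the tasks it consumes.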
Model Pipeline Execution:
The software uses model pipeline execution, or pipelining, to work around the limitation of task parallelism that dependent tasks cannot run fully in parallel. This approach modifies the system by introducing delays between tasks where there is a data dependency, so that each task can process a different time step concurrently.
For more detail, please have a look at the MathWorks documentation on dataflow domains.