I have a script that looks something like:
while true; do
  read -t10 -d$'\n' input_from_serial_device </dev/ttyS0
  # do some costly processing on the string
done
The problem is that it will miss the next input from the serial device because it is burning CPU cycles doing the costly string processing.
I thought I could fix this by using a pipe, on the principle that bash will buffer the input between the two processes:
( while true; do
    read -d$'\n' input_from_serial_device </dev/ttyS0
    echo "$input_from_serial_device"
  done ) | ( while true; do
    read -t10 input_from_first_process
    # costly string processing
  done )
First, I want to check that I've understood pipes correctly, and that this will indeed buffer the input between the two processes as I intended. Is this idea correct?
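As a sanity check of the buffering idea, here is a toy pipeline with no serial device involved: the producer writes five lines as fast as it can, while the consumer sleeps before each read. If the pipe didn't buffer, lines would be lost while the consumer was busy; in fact all five arrive, because unread data sits in the kernel's pipe buffer (typically 64 KiB on Linux) in the meantime. The message text and the count of five are just placeholders.

```shell
#!/usr/bin/env bash
# Toy test of pipe buffering (no serial device needed).
# The producer emits 5 lines with no delay; the consumer sleeps before
# counting each one, so any line not held in the pipe would be lost.
( for i in 1 2 3 4 5; do
    echo "message $i"
  done ) | ( count=0
  while read -r line; do
    sleep 0.1              # stand-in for the costly processing
    count=$((count + 1))
  done
  echo "received $count lines" )   # prints: received 5 lines
```

This only exercises a few short lines; the kernel buffer is finite, so a producer that outruns the consumer for long enough will eventually block on its write rather than lose data.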
Secondly, if I get the input I'm looking for in the second process, is there a way to kill both processes immediately, rather than exiting from the second and waiting for the next input to arrive before the first exits as well?
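On the second question, one approach (a sketch, not tested against a real serial device) is to swap the anonymous pipe for a named FIFO, so the consumer knows the producer's PID and can signal it the moment the interesting line shows up. The FIFO path, the five `msg` lines, and the `msg 3` trigger below are all stand-ins for the real device loop.

```shell
#!/usr/bin/env bash
# Sketch: consumer kills the producer as soon as it sees what it wants.
# A named FIFO replaces `|` so the producer's PID is known.
fifo=$(mktemp -u)          # placeholder path for the FIFO
mkfifo "$fifo"

# Producer: stand-in for the loop reading /dev/ttyS0.
( for i in 1 2 3 4 5; do echo "msg $i"; sleep 0.1; done >"$fifo" ) &
producer=$!

while read -r line; do
  if [ "$line" = "msg 3" ]; then   # the input we were waiting for
    kill "$producer" 2>/dev/null   # stop the producer immediately
    break
  fi
done <"$fifo"

wait "$producer" 2>/dev/null
rm -f "$fifo"
echo "stopped after: $line"        # prints: stopped after: msg 3
```

With a plain `a | b` pipeline this is harder, because `b` doesn't know `a`'s PID; when `b` simply exits, `a` only dies from SIGPIPE on its next write, which is exactly the wait-for-the-next-input delay described above. Since both sides of a pipeline share a process group, `kill 0` from the consumer is another option, though it also signals the parent script itself.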
Finally, I realise bash isn't the best way to do this and I'm currently working on a C program, but I'd quite like to get this working as an intermediate solution.
Thank you!
Comments:

Interesting question, but outside of my range of experience and would be hard to test (who has a serial port any more!? ;-) ). Good luck.

… (where vi creates its work file, might be /var/tmp on some machines). About the same time, I had a colleague try to stuff a 100 MB file into a Windows pipe, and I knew from testing on the same project that Windows NT definitely had limits (64 K or so). So different OSes and versions (as well as permissioning and configuration) may affect this. The best thing is to set up some tests and see what happens. Good luck.