I have a very long std_logic_vector of size 752. Every group of 8 bits in this vector needs to be bit-reversed for a function. So
a(0) <= b(7)
a(1) <= b(6)
a(2) <= b(5)
a(3) <= b(4)
a(4) <= b(3)
a(5) <= b(2)
a(6) <= b(1)
a(7) <= b(0)
And then reversing it for the next 8 bits
a(8) <= b(15)
a(9) <= b(14)
a(10) <= b(13)
a(11) <= b(12)
a(12) <= b(11)
a(13) <= b(10)
a(14) <= b(9)
a(15) <= b(8)
And this continues for every 8 bits up to bit 752. Is there a better way to do this? I was thinking of using a for loop within a for loop: the outer loop steps through the vector in increments of 8 (i.e., it selects each byte boundary), and the inner loop reverses the bit order within each 8-bit group.
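The nested-loop idea can be written directly as two nested for-generate statements, which produce one concurrent assignment per bit. A minimal sketch, assuming `a` and `b` are both `std_logic_vector(751 downto 0)` (the names from the question); the entity wrapper and labels here are hypothetical:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity byte_reverser is
    port (
        b : in  std_logic_vector(751 downto 0);
        a : out std_logic_vector(751 downto 0)
    );
end entity byte_reverser;

architecture rtl of byte_reverser is
begin
    -- 752 bits = 94 bytes. Each byte i keeps its position in the
    -- vector, but the bits inside it are mirrored: offset j maps
    -- to offset 7 - j, so a(8*i + j) takes b(8*i + (7 - j)).
    gen_bytes : for i in 0 to 93 generate
        gen_bits : for j in 0 to 7 generate
            a(8*i + j) <= b(8*i + (7 - j));
        end generate gen_bits;
    end generate gen_bytes;
end architecture rtl;
```

For i = 0 this reproduces `a(0) <= b(7)` … `a(7) <= b(0)`, and for i = 1 it gives `a(8) <= b(15)` … `a(15) <= b(8)`, matching the hand-written assignments above. The same mapping could equally be packaged as a pure function using ordinary for loops, which would also work on vectors of other byte-aligned widths.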