Updated based on your comment to @Heatsink. If I understand correctly, you're saying that given
    struct Data {
        int* ptr;
        int a, b;
    };
and
    vector<Data> data;
data[0].ptr points to GPU memory containing data[0].a and data[0].b. If this is correct, then I would recommend the following organization instead:
    struct Data {
        int a, b;
    };
    thrust::host_vector<Data> h;
    thrust::device_vector<Data> d = h;
The GPU memory for h[i] is simply d[i]. I would not recommend storing a per-element pointer to GPU memory, nor would you want to allocate separate GPU memory for each Data object (that would be horrifically slow). Your compute code will probably also be faster if you use separate arrays.
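A minimal sketch of this pattern (CUDA C++ with Thrust; the sizes and initialization are illustrative, not from the question):

    #include <thrust/host_vector.h>
    #include <thrust/device_vector.h>
    #include <thrust/copy.h>

    struct Data {
        int a, b;
    };

    int main() {
        // Build the data on the host.
        thrust::host_vector<Data> h(1000);
        for (size_t i = 0; i < h.size(); ++i) {
            h[i].a = static_cast<int>(i);
            h[i].b = static_cast<int>(2 * i);
        }

        // One bulk copy moves every element to the GPU;
        // the device storage for h[i] is simply d[i].
        thrust::device_vector<Data> d = h;

        // A raw device pointer for hand-written kernels,
        // replacing the per-element ptr field entirely.
        Data* d_ptr = thrust::raw_pointer_cast(d.data());
        (void)d_ptr;

        // Copy results back to the host after computation.
        thrust::copy(d.begin(), d.end(), h.begin());
        return 0;
    }

The key point is that one allocation and one bulk transfer replace per-element cudaMalloc calls and per-element copies.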
Generally, organizing your data as structure-of-arrays instead of array-of-structures is preferred for several reasons, including alignment and ease of load coalescing.
From the comment exchange:

Me: What does ptr point to? Is the data read or written on the GPU? Is it possible that several instances of Data contain pointers to the same object?

OP: ptr is a pointer to device memory, individual for each Data object. I just now realized that I can write something like cudaMalloc(&h[i].ptr, ...) and later assign d = h. Is that true? I can't say why it was confusing me before. :) Do you want to get the accepted answer? Please formulate your comment as an answer then.