
I started using the NVIDIA Thrust library, which comes as part of the CUDA 4.0 toolkit, and wanted to verify something before digging deeper. I can do the following with no issues during the build:

    thrust::host_vector<int> iVec;
    thrust::device_vector<int> iVec2;
    thrust::host_vector<std::string> sVec;

When I try the following I get a compilation error:

    thrust::device_vector <std::string> sVec2;

What I would like to know is: can I assume that any data type usable in an STL vector is also usable in a Thrust vector, regardless of whether it is a host or device vector? Or are there limitations here, and should I not expect this to work?

The error I get is the following:

    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\device\cuda\for_each.inl(93): error C2027: use of undefined type 'thrust::detail::STATIC_ASSERTION_FAILURE'
            with [ x=false ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\device\dispatch\for_each.h(56) : see reference to function template instantiation 'RandomAccessIterator thrust::detail::device::cuda::for_each_n(RandomAccessIterator,Size,UnaryFunction)' being compiled
            with [ RandomAccessIterator=thrust::detail::normal_iterator>, Size=__w64 int, UnaryFunction=thrust::detail::device_destroy_functor ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\device\for_each.inl(43) : see reference to function template instantiation 'RandomAccessIterator thrust::detail::device::dispatch::for_each_n(RandomAccessIterator,Size,UnaryFunction,thrust::detail::cuda_device_space_tag)' being compiled
            with [ RandomAccessIterator=thrust::detail::normal_iterator>, OutputIterator=thrust::detail::normal_iterator>, Size=__w64 int, UnaryFunction=thrust::detail::device_destroy_functor ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\device\for_each.inl(54) : see reference to function template instantiation 'OutputIterator thrust::detail::device::for_each_n(OutputIterator,Size,UnaryFunction)' being compiled
            with [ OutputIterator=thrust::detail::normal_iterator>, InputIterator=thrust::detail::normal_iterator>, UnaryFunction=thrust::detail::device_destroy_functor, Size=__w64 int ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\dispatch\for_each.h(72) : see reference to function template instantiation 'InputIterator thrust::detail::device::for_each(InputIterator,InputIterator,UnaryFunction)' being compiled
            with [ InputIterator=thrust::detail::normal_iterator>, UnaryFunction=thrust::detail::device_destroy_functor ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\for_each.inl(51) : see reference to function template instantiation 'InputIterator thrust::detail::dispatch::for_each(InputIterator,InputIterator,UnaryFunction,thrust::device_space_tag)' being compiled
            with [ InputIterator=thrust::detail::normal_iterator>, UnaryFunction=thrust::detail::device_destroy_functor ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\for_each.inl(67) : see reference to function template instantiation 'InputIterator thrust::detail::for_each(InputIterator,InputIterator,UnaryFunction)' being compiled
            with [ InputIterator=thrust::detail::normal_iterator>, UnaryFunction=thrust::detail::device_destroy_functor ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\dispatch\destroy.h(59) : see reference to function template instantiation 'void thrust::for_each>(InputIterator,InputIterator,UnaryFunction)' being compiled
            with [ ForwardIterator=thrust::detail::normal_iterator>, T=value_type, InputIterator=thrust::detail::normal_iterator>, UnaryFunction=thrust::detail::device_destroy_functor ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\destroy.h(42) : see reference to function template instantiation 'void thrust::detail::dispatch::destroy(ForwardIterator,ForwardIterator,thrust::detail::false_type)' being compiled
            with [ ForwardIterator=thrust::detail::normal_iterator> ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\vector_base.inl(442) : see reference to function template instantiation 'void thrust::detail::destroy>(ForwardIterator,ForwardIterator)' being compiled
            with [ Pointer=thrust::device_ptr, ForwardIterator=thrust::detail::normal_iterator> ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\detail\vector_base.inl(440) : while compiling class template member function 'thrust::detail::vector_base::~vector_base(void)'
            with [ T=std::string, Alloc=thrust::device_malloc_allocator ]
    c:\program files\nvidia gpu computing toolkit\cuda\v4.0\include\thrust\device_vector.h(55) : see reference to class template instantiation 'thrust::detail::vector_base' being compiled
            with [ T=std::string, Alloc=thrust::device_malloc_allocator ]
    c:\users\fsquared\mydata\idata\main.cpp(119) : see reference to class template instantiation 'thrust::device_vector' being compiled
            with [ T=std::string ]
    ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

I am using MSVC 2010 here.

  • I have no experience with either CUDA or Thrust (therefore this is not an answer), but I think the data type has to be supported on the device, and I doubt that an arbitrary C++ class is supported in CUDA. Also keep in mind that std::string is not a simple standard data type but a C++ class, and not a simple one at that (lots of template machinery). It might work with pointers to strings or with plain old char arrays, but all of that is just guessing; see the sketch below. Commented May 9, 2011 at 1:12
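
For what it's worth, the "plain old char arrays" idea floated in the comment can be sketched roughly like this (the FixedString type and its 32-byte width are made up for illustration, not part of the question; build it as a .cu file with nvcc). A fixed-width POD holder copies to the device without trouble precisely because it has no constructors, destructor, or heap pointers of its own:

    #include <thrust/host_vector.h>
    #include <thrust/device_vector.h>
    #include <thrust/device_ptr.h>
    #include <cstring>

    // Hypothetical fixed-width string holder: a plain POD aggregate, so Thrust
    // can move it to the device as raw bytes (unlike std::string, which manages
    // heap memory on the host).
    struct FixedString
    {
        char data[32];
    };

    int main()
    {
        thrust::host_vector<FixedString> h(2);
        std::strcpy(h[0].data, "hello");
        std::strcpy(h[1].data, "world");

        // Trivially copyable elements transfer with a plain memcpy-style copy,
        // and their (trivial) destruction never has to run on the device.
        thrust::device_vector<FixedString> d = h;

        FixedString* raw = thrust::raw_pointer_cast(&d[0]);  // usable from a kernel
        (void)raw;
        return 0;
    }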

1 Answer


CUDA does not support standard C++ container types in device code; it is basically limited to C++ POD types. You can define your own classes for use on the GPU, but their constructors and member functions must be defined as CUDA __device__ functions, and there are still a number of limitations on which language features are supported in device code.
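
To make that concrete, here is a minimal sketch of the kind of user-defined type being described (Complexish and Norm2 are invented names, not anything from the question, and the file has to be a .cu compiled by nvcc): every constructor and member function the device touches is marked __host__ __device__, and the type owns no host-side resources.

    #include <thrust/device_vector.h>
    #include <thrust/transform.h>

    // Hypothetical value type: all special members and methods the GPU needs
    // are device-callable, and there is no host-side resource management.
    struct Complexish
    {
        float re, im;

        __host__ __device__ Complexish() : re(0.0f), im(0.0f) {}
        __host__ __device__ Complexish(float r, float i) : re(r), im(i) {}
        __host__ __device__ float norm2() const { return re * re + im * im; }
    };

    // Functor executed on the device by thrust::transform.
    struct Norm2
    {
        __host__ __device__ float operator()(const Complexish& c) const
        {
            return c.norm2();
        }
    };

    int main()
    {
        // Construction, copying, and eventual destruction of the elements all
        // happen in device memory, which is why the members above must be
        // callable from device code.
        thrust::device_vector<Complexish> vals(4, Complexish(1.0f, 2.0f));
        thrust::device_vector<float> norms(4);

        thrust::transform(vals.begin(), vals.end(), norms.begin(), Norm2());
        return 0;
    }

Contrast this with std::string: its constructors, destructor, and heap management are host-only code, so Thrust has no way to construct or destroy std::string objects in device memory.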
