Here is a possibility.
- I used an optimization for the second term, as it places the values of `c` along the "diagonal" of the tensor via linear indexing.
- For the first term there isn't much room for optimization, as it is a dense multiplication, so `bsxfun` seems appropriate.
- For the third term I stick to `bsxfun`, but as the result is somewhat sparse, you may benefit from filling it "by hand" if the size of your matrix is large.
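The "diagonal" trick relies on the fact that in a `dim x dim x dim` array the entries `(i, i, i)` are exactly `dim^2 + dim + 1` positions apart in linear (flattened) order — and that stride works out the same in MATLAB's column-major and NumPy's row-major layout. A small sanity-check sketch of the stride in NumPy (hypothetical size `d = 4`):

```python
import numpy as np

d = 4
c = np.arange(1, d + 1)
x = np.zeros((d, d, d))

# Entries (i, i, i) of a d x d x d array sit d*d + d + 1 apart in the
# flattened array -- the same stride as MATLAB's 1:dim*(dim+1)+1:end,
# up to 0- vs 1-based indexing.
x.flat[::d * d + d + 1] = 2 * c

for i in range(d):
    assert x[i, i, i] == 2 * c[i]
assert x.sum() == 2 * c.sum()
```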
Here is the code:
```matlab
dim = 10;
c = (1:dim)';
e = eye(dim);
x = zeros([dim, dim, dim]);

% initialize with the second term: 2*c along the (i, i, i) diagonal
x(1:dim*(dim+1)+1:end) = 2 * c;

% add the first term: the dense outer product c_i * c_j * c_k
x = x + bsxfun(@times, bsxfun(@times, c, shiftdim(c, -1)), shiftdim(c, -2));

% add the third term
x = x - sum(sum(bsxfun(@times, shiftdim(c*c', -3), ...
    bsxfun(@times, bsxfun(@times, permute(e, [1, 3, 4, 2, 5]), permute(e, [3, 1, 4, 2, 5])), permute(e, [3, 4, 1, 5, 2])) + ...
    bsxfun(@times, bsxfun(@times, permute(e, [1, 3, 4, 2, 5]), permute(e, [3, 1, 4, 5, 2])), permute(e, [3, 4, 1, 2, 5])) + ...
    bsxfun(@times, bsxfun(@times, permute(e, [1, 3, 4, 5, 2]), permute(e, [3, 1, 4, 2, 5])), permute(e, [3, 4, 1, 2, 5]))), 5), 4);
```
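If I read the expression correctly, the code computes `x(i,j,k) = c_i*c_j*c_k + 2*c_i*[i==j==k] - (c_i*c_k*[i==j] + c_i*c_j*[i==k] + c_j*c_i*[j==k])`. Assuming that is indeed the target formula, here is a NumPy translation you could use to cross-check the MATLAB result element by element (a sketch, not part of the original code):

```python
import numpy as np

d = 10
c = np.arange(1.0, d + 1)

# first term: dense outer product c_i * c_j * c_k via broadcasting
x = c[:, None, None] * c[None, :, None] * c[None, None, :]

# second term: add 2*c_i on the (i, i, i) diagonal
idx = np.arange(d)
x[idx, idx, idx] += 2 * c

# third term: subtract the three "one pair of equal indices" contributions
ec = np.eye(d) * c[:, None]              # ec[i, j] = delta_ij * c_i
x -= ec[:, :, None] * c[None, None, :]   # delta_ij * c_i * c_k
x -= ec[:, None, :] * c[None, :, None]   # delta_ik * c_i * c_j
x -= ec[None, :, :] * c[:, None, None]   # delta_jk * c_j * c_i
```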
EDIT
A much more efficient (especially memory-wise) computation of the third term:

```matlab
ec = bsxfun(@times, e, c);   % ec(i, j) = delta_ij * c(i)
x = x - ...
    bsxfun(@times, ec, shiftdim(c, -2)) - ...
    bsxfun(@times, c', reshape(ec, [dim, 1, dim])) - ...
    bsxfun(@times, c, reshape(ec, [1, dim, dim]));
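To gain confidence in the leaner version, you can compare its third term against a naive triple loop at a small size. A NumPy sketch of that check (the formula `delta_ij*c_i*c_k + delta_ik*c_i*c_j + delta_jk*c_j*c_i` is my reading of the `bsxfun` expressions):

```python
import numpy as np

d = 6
c = np.arange(1.0, d + 1)
ec = np.eye(d) * c[:, None]          # analogue of bsxfun(@times, e, c)

# broadcast version of the third term, mirroring the EDIT code
t3 = (ec[:, :, None] * c[None, None, :]
      + ec[:, None, :] * c[None, :, None]
      + ec[None, :, :] * c[:, None, None])

# naive triple loop as a correctness reference
ref = np.zeros((d, d, d))
for i in range(d):
    for j in range(d):
        for k in range(d):
            ref[i, j, k] = ((i == j) * c[i] * c[k]
                            + (i == k) * c[i] * c[j]
                            + (j == k) * c[j] * c[i])

assert np.allclose(t3, ref)
```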
Since there is no dependency between different `i` and `j` indices, you could also calculate each term in the summation in parallel (e.g. with `parfor`).