So, I recently tried doing this:
#include <cstdlib>
#include <iostream>
using namespace std;

void Foo(int **bar, int size) {
    bar = new int *[size];
    for (int i = 0; i < size; i++)
        bar[i] = new int[4];
    // Do other stuff
    return;
}

int main(int argc, char *argv[]) {
    int **data;
    int size = atoi(argv[1]);
    Foo(data, size);
    for (int i = 0; i < size; i++) {
        for (int j = 0; j < 4; j++)
            cout << "i = " << i << ", j = " << j << ", data = " << data[i][j] << endl;
        // Do other stuff
    }
    for (int i = 0; i < size; i++)
        delete[] data[i];
    delete[] data;
    return 0;
}
and invariably I would get a segfault right at that cout statement. After changing the code so that the array was dynamically allocated in main, the problem went away. So, whether or not I get a segfault seems to depend on whether the array is dynamically allocated and destroyed in the same function. This seems wrong to me, as it shouldn't be a problem in standard C++. I am using a MacBook with Xcode for the g++ command. Can anyone else confirm that Xcode's implementation does this?
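For reference, here is roughly what the working variant looked like, with the allocation and deallocation both done in main (this is a reconstruction of the change described above, not the exact code):

#include <cstdlib>
#include <iostream>
using namespace std;

int main(int argc, char *argv[]) {
    int size = atoi(argv[1]);
    int **data = new int *[size];        // data is assigned in this scope, so it is a valid pointer
    for (int i = 0; i < size; i++) {
        data[i] = new int[4];
        for (int j = 0; j < 4; j++)
            data[i][j] = i * 4 + j;      // fill in some values so the print below is well-defined
    }
    for (int i = 0; i < size; i++)
        for (int j = 0; j < 4; j++)
            cout << "i = " << i << ", j = " << j << ", data = " << data[i][j] << endl;
    for (int i = 0; i < size; i++)
        delete[] data[i];
    delete[] data;
    return 0;
}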
Comments:

Had you checked the value of data after the call to Foo, you would have seen that data didn't change at all. That's what all the answers given to you will explain. Also: stackoverflow.com/questions/27487495/…

I did check data after I called Foo, and I noticed that it hadn't changed. However, I was under the impression that passing any pointer to a function was pass-by-reference. The answer given below does explain my problem, thank you.

You would have seen that data was still uninitialized (NULL at best), so your cout would be working through an invalid pointer (undefined behavior). And no, passing a pointer is pass-by-value, no different than any other value type being passed. Pass-by-reference in C++ means exactly that -- passing a reference. A pointer is not a reference.
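A minimal sketch of the fix the comments point toward: since the pointer itself is copied into Foo, pass it by reference so the allocation made inside Foo is visible to the caller. The signature change below is one possible approach, not the only one:

#include <cstdlib>
#include <iostream>
using namespace std;

void Foo(int **&bar, int size) {   // bar is now a reference to the caller's pointer
    bar = new int *[size];
    for (int i = 0; i < size; i++)
        bar[i] = new int[4];
}

int main(int argc, char *argv[]) {
    int **data = nullptr;
    int size = atoi(argv[1]);
    Foo(data, size);               // data now points at the rows allocated inside Foo
    for (int i = 0; i < size; i++)
        delete[] data[i];
    delete[] data;
    return 0;
}

Alternatively, Foo could take an int ***bar and the caller could pass &data, or Foo could simply return the newly allocated int **.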