Anti-aliasing does exactly what the name says: it prevents primitives from assuming aliases, such as a straight (diagonal) line turning into a staircase. Because anti-aliasing usually results in 'soft' edges, the term is sometimes applied to any algorithmic 'softening', but that usage is incorrect.
Assuming your source texture already contains some anti-aliasing of the car's curved edges (i.e. if you opened the source PNG or whatever in an art program and zoomed in on the edges, you'd see some softness), I think your code is failing to apply multisampling for some reason. Zoom in on the top edge of the roof and look at the transition between the very top of the step one in from the right and the step to its left: the harsh dark colour at the top just steps up a pixel with no blending. That's symptomatic of the edge being baked into the original texture and copied out pixel by pixel.
GL_LINEAR is a filtering parameter that affects how OpenGL answers questions like 'if the source pixel at (0, 0) is one colour and the one at (0, 1) is another, then what colour is at (0, 0.5)?' With linear filtering, when you scale your texture above its native size, the extra pixels OpenGL has to invent are created as a linear combination of the nearest source pixels. With GL_NEAREST, each invented pixel is simply the colour of whichever source pixel is nearest. That's the difference between textures that scale up to look blurry and low contrast and textures that scale up to look like mosaics with obvious pixels. So GL_LINEAR (usually) adds blurriness and softness to the image overall, but it isn't really anything to do with anti-aliasing.
With respect to why you're not getting anti-aliasing, two possible reasons spring to mind. You may have some error in your code, or you may simply be using a different algorithm from Cocos2D. Apple's hardware multisampling support arrived only in iOS 4, and Cocos2D predates that, so it may be sticking to a 'software' method (specifically, rendering the whole scene pixel by pixel at 4x the size, then getting the GPU to scale it down). That would be significantly slower but would stop the hardware from optimising the process. One optimisation that some hardware applies is to multisample only at (approximately) the edges of geometry, which wouldn't benefit you at all here, since your jagged edges are inside the texture rather than at polygon boundaries.
Another possibility is that you're scaling your image down when you draw (though it doesn't look like it) and Cocos2D is generating mip maps whereas you're not. Mip maps precompute the image at a series of smaller scales and draw from the closest precomputed size. Because the scaling happens ahead of time rather than per frame, a more expensive filter can be applied, which tends to lead to less aliasing.
Can you post some code?