
Both scipy and numpy have least-squares estimation, but I got slightly confused after reading the documentation.

So, my problem is classical regression, where I am trying to find the best matrix transformation between two sets of vectors. So something of the form:

Ax = b

Here, I know x and b. A has 9 unknown components and x and b are 3D vectors, so I need at least 3 non-collinear points. I have them, and I can stack the x and b vectors into 3x3 matrices.

However, from the documentation, I see that it is designed for solving systems where A and b are known, so we solve for x. Assuming A is invertible, this would mean something like:

x = A^(-1) b          (^(-1) indicating the inverse or pseudoinverse)
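For reference, this standard setup (A and b known, solve for x) is handled directly by numpy; a minimal sketch with made-up values:

```python
import numpy as np

# A and b known; solve for x -- the setup numpy/scipy assume.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])
b = np.array([2.0, 6.0, 12.0])

x = np.linalg.solve(A, b)                      # exact solve for square, invertible A
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares; also works for non-square A

print(x)  # [1. 2. 3.]
```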

In my case, the solution becomes

A = b x^(-1)

I was wondering if I can still somehow use the built in numpy machinery for my setup.

  • The pseudo inverse of a vector x is the matrix inv(t(x)*x)*t(x), where t(x) is x transposed and * is the matrix product. All of that can be done in Python. There are also functions like numpy.linalg.lstsq() and modules like StatsModel. There are also other Stack Overflow Q&As on this topic in Python... chances that this is a duplicate question are pretty high. Commented Jan 13, 2015 at 12:34

1 Answer


Look at this:

Ax = b
x^TA^T = b^T

where A^T denotes the transpose of A. Now define the symbols Ap = x^T, Xp = A^T, and bp = b^T, and your problem becomes:

Ap Xp = bp

which is exactly the form you can treat with least squares.
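The transpose trick above can be sketched in numpy by stacking the known point pairs as rows (A_true, the point count, and the variable names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = rng.standard_normal((3, 3))   # hypothetical ground-truth transform

# Stack n >= 3 non-collinear points as ROWS: X[i] = x_i^T, B[i] = b_i^T.
X = rng.standard_normal((5, 3))
B = X @ A_true.T                       # since (A x_i)^T = x_i^T A^T

# Solve X @ A^T = B in the least-squares sense; the unknown is A^T.
At, residuals, rank, sv = np.linalg.lstsq(X, B, rcond=None)
A_est = At.T

print(np.allclose(A_est, A_true))      # True
```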
