EDIT 1: This question has been completely modified for improved clarity and to better state my intent. The original post is at the end.
I think a lot of people were confused by my example code. It was meant to be a purely hypothetical example to teach me how to set up my own constraints in a SciPy minimize procedure when the constraints are not on x directly, but on some set of variables inside the objective function (which depend on x). User @Reinderien has given me a lot of insight with their (first) answer, so now I know a bit more about the problem at hand. Here is that problem in the most general sense.
I have a set of k vectors: v1, v2, ... vh, ... vk. Each has m parameters: vh[0], vh[1], ... vh[i], ... vh[m-1]. The i-th parameter of each vector corresponds to the same physical quantity. For example, if these vectors described weather in different cities, v1[0] could describe humidity (parameter 0) in Houston (vector 1), v2[0] would then describe humidity (parameter 0) in Austin (vector 2), and so on.
I have an input vector x of length n: x[0], x[1], ... x[j], ... x[n-1]. Note that n does not necessarily equal m. I don't think it is necessary to show my specific objective function (I am trying to keep things general), but in case more specificity is needed: each element i of every vh is a function of the entire vector x, where each element j of x is modified by element [i, j] of some "change" matrix C. (Quick Edit 2: For a better example of what is happening here, vh[i] = vh[i] + x[j] + C[i, j] for all h, i, and j. This runs in a nested for-loop, and the initial vh[i] is retrieved from data, i.e. vh = np.copy(dh) for all h prior to loop execution.)
Here is the crux of the optimization problem: I have vectors t, s_lb, and s_ub, all of length m, which hold the target, minimum, and maximum values for every parameter i of every vh. I want to minimize the sum over all vh of the sum of squared errors (mean squared errors would work just as well) between t and vh. This paragraph, written out mathematically, is as follows:
minimize over x:  sum_{h = 1 to k} sum_{i = 0 to m-1} (vh[i] - t[i])^2
such that s_lb[i] < vh[i] < s_ub[i] for all i, for all vh.
(The inner summation runs from i = 0 to m-1, not 1 to m, because indexing starts at 0.)
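In code, the model is small enough to sketch. Assuming the initial vectors dh are stacked as rows of a matrix V0 (shape k x m) and C has shape m x n -- these names and the example numbers are hypothetical, not my real data -- the update rule and objective reduce to:

import numpy as np

# Hypothetical example data: k = 2 vectors of m = 3 parameters, n = 4 inputs
V0 = np.array([[1.0, 2.0, 3.0],
               [1.5, 2.5, 3.5]])            # initial vh (the dh), one per row
C = np.arange(12.0).reshape(3, 4) / 10.0    # "change" matrix, shape (m, n)
t = np.array([1.2, 2.2, 3.2])               # target values, length m

def objective(x):
    # vh[i] = dh[i] + sum_j (x[j] + C[i, j]) = dh[i] + x.sum() + C[i, :].sum()
    V = V0 + x.sum() + C.sum(axis=1)        # broadcasts over the k rows
    return np.sum((V - t) ** 2)             # squared errors over all h and i

Written this way, it is easy to see that each vh[i] depends on x only through the single scalar x.sum().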
I think that about covers it. My only question at this point is how to define the matrix A in SciPy's scipy.optimize.LinearConstraint; I already have lb and ub, as you can see. I've been trying to use the accepted answer here as a reference for what to do, but the nested summation is confusing me.
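To make the question concrete, here is my current understanding of what A should look like, continuing the hypothetical names from the sketch above; s_lb and s_ub are made-up numbers. Since each vh[i] depends on x only through x.sum(), each row of A should be all ones, with the constant offsets moved into lb and ub:

from scipy.optimize import LinearConstraint

# Hypothetical spec vectors, length m = 3
s_lb = np.array([0.5, 1.5, 2.5])
s_ub = np.array([2.0, 3.0, 4.0])

k, m = V0.shape
n = C.shape[1]

# s_lb[i] < vh[i] < s_ub[i] with vh[i] = dh[i] + C[i, :].sum() + x.sum()
# rearranges to s_lb[i] - dh[i] - C[i, :].sum() < x.sum() < s_ub[i] - ...,
# so A is all ones and the constants move into lb and ub.
offset = V0 + C.sum(axis=1)     # shape (k, m): dh[i] + C[i, :].sum()
A = np.ones((k * m, n))         # one all-ones row per (h, i) pair
lb = (s_lb - offset).ravel()
ub = (s_ub - offset).ravel()
linear_constraint = LinearConstraint(A, lb, ub)

One thing I notice: every row of A is identical, so all k*m rows just bound the same scalar x.sum() by different intervals, and the feasible set is their intersection.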
ORIGINAL POST:
I am restricted to a cell phone at the minute but I could use some help. Below is my best attempt at a minimal reproducible example, but I'm hoping it's not even necessary.
import numpy as np
import scipy as sp

# Initial data vectors and the spec (target) vector
data1 = [1.0, 2.0]
D1_0 = np.asarray(data1)
data2 = [1.5, 2.5]
D2_0 = np.asarray(data2)
specs = [1.25, 2.25]
S = np.asarray(specs)

# "Change" matrix: row j pairs with x[j], column i pairs with D[i]
changes = [[0.1, 0.2], [0.4, 0.5], [0.6, 0.7]]
C = np.asarray(changes)
rows_C, cols_C = C.shape

def get_ssd(D):
    # Sum of squared deviations from the spec vector
    ssd = np.sum((D - S) ** 2)
    return ssd

def objective(x):
    D1 = np.copy(D1_0)
    D2 = np.copy(D2_0)
    # Every element of D1/D2 is shifted by every x[j] plus the matching C entry
    for i in range(cols_C):
        for j in range(rows_C):
            D1[i] = D1[i] + x[j] + C[j][i]
            D2[i] = D2[i] + x[j] + C[j][i]
    minim = get_ssd(D1) + get_ssd(D2)
    return minim

x0 = np.zeros(rows_C)
bnds = sp.optimize.Bounds(-1.0, 1.0)
res = sp.optimize.minimize(objective, x0, method='Powell', bounds=bnds)
print(res.x)
If you actually run this code, it will execute fine, but it isn't really informative. My question is this: I want to incorporate a condition where D1 and D2 must be within some range for all i. I know other optimization methods can put expression constraints on x, but I need something like that for D1 and D2, not x.
My only remotely good thought for achieving this is to put all of this code inside another optimizer (or even just a for-loop) that changes the bounds on x until the criteria are met. This seems extremely inelegant to me.
I feel like there must be a way to put constraints on variables internal to the objective function and not x. I just don't know how to do that.
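Here is a sketch of the kind of constraint I am after, written against the MRE above using scipy.optimize.NonlinearConstraint. The bound arrays D_lb and D_ub are hypothetical placeholders, and internal_values simply recomputes D1 and D2 from x the same way the objective's loops do:

from scipy.optimize import NonlinearConstraint, minimize

def internal_values(x):
    # Vectorized equivalent of the objective's loops:
    # D[i] = D_0[i] + sum_j x[j] + sum_j C[j][i]
    D1 = D1_0 + x.sum() + C.sum(axis=0)
    D2 = D2_0 + x.sum() + C.sum(axis=0)
    return np.concatenate([D1, D2])

# Hypothetical per-element bounds on D1 and D2 (two elements each)
D_lb = np.array([0.5, 1.5, 0.5, 1.5])
D_ub = np.array([2.0, 3.0, 2.0, 3.0])

con = NonlinearConstraint(internal_values, D_lb, D_ub)
res = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=[con])
print(res.x)

Note that Powell does not accept constraints, so the method is switched to SLSQP here.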

From a comment on the original post: you are asking that x.sum() converge to S - C.sum() - D1_0, but also that x.sum() converge to S - C.sum() - D2_0. This doesn't make sense, and really seems like the problem is ill-posed. If you file a new question, be sure to include more background on what you're "actually doing", because as it stands this is an XY problem.