
In this assignment I created a basic rect signal a[n] over the domain [-1000, 1000] that is 1 only for |n| < 100. In other words, it's an array (complex, with zero imaginary part) of the form [0, 0, ..., 0, 1, 1, ..., 1, 1, 0, ..., 0, 0] containing exactly 199 ones: 99 at positive indices, 99 at negative indices, and one at 0. It looks like this:

I've also created the following Fourier-transform function, plus a threshold function to clean floating-point errors from the results:

import numpy as np
import cmath
import matplotlib.pyplot as plt

D=1000
j = complex(0, 1)
pi = np.pi
N = 2 * D + 1

a = np.zeros(N)              # rect: 1 for |n| < 100, 0 elsewhere
for i in range(-99, 100):
    a[i + D] = 1

threshold = 1e-10
def clean_complex_array(arr, tol=threshold):
    real = np.real(arr)
    imag = np.imag(arr)

    # Snap near-zero components
    real[np.abs(real) < tol] = 0
    imag[np.abs(imag) < tol] = 0
    # Snap components whose fractional part is close to 0 or 1
    real_frac = real - np.round(real)
    imag_frac = imag - np.round(imag)

    real[np.abs(real_frac) < tol] = np.round(real[np.abs(real_frac) < tol])
    imag[np.abs(imag_frac) < tol] = np.round(imag[np.abs(imag_frac) < tol])

    return real + 1j * imag

def fourier_series_transform(data, pos_range, inverse=False):
    full_range = 2 * pos_range + 1
    # Allocate result array
    result = np.zeros(full_range, dtype=complex)

    if inverse:
        # Inverse transform: reconstruct time-domain signal from bk
        for n in range(-pos_range, pos_range + 1):
            for k in range(-pos_range, pos_range + 1):
                result[n + pos_range] += data[k + pos_range] * cmath.exp(j * 2 * pi * k * n / full_range)
    else:
        # Forward transform: compute bk from b[n]
        for k in range(-pos_range, pos_range + 1):
            for n in range(-pos_range, pos_range + 1):
                result[k + pos_range] += (1 / full_range) * data[n + pos_range] * cmath.exp(-j * 2 * pi * k * n / full_range)

    return result

ak = fourier_series_transform(a, D)
ak = clean_complex_array(ak)
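
For reference, the Fourier-series pair this function implements (with N = 2D + 1 and the 1/N factor on the forward transform, matching the code above) is

$$a_k = \frac{1}{N}\sum_{n=-D}^{D} a[n]\, e^{-j 2\pi k n/N}, \qquad a[n] = \sum_{k=-D}^{D} a_k\, e^{\,j 2\pi k n/N}.$$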

a_k looks like this (a real sinc-like signal, as expected):

I've checked that the threshold value is reasonable: the floating-point errors start at around 1e-14, and there are no significant contributions to the signal below 1e-8.
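
For instance, a minimal check of that noise floor, using the coefficients before cleaning (the exact coefficients of this symmetric real rect are purely real, so any imaginary part is pure rounding noise):

raw_ak = fourier_series_transform(a, D)   # coefficients before clean_complex_array
print(np.abs(raw_ak.imag).max())          # around 1e-14: pure floating-point noise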

Now for the part I had a problem with: we're asked to create the frequency-domain signal f_k such that f_k is a_k with 4 zeros inserted after each value, multiplied by 0.2, i.e. it will look like 0.2 * [a_0, 0, 0, 0, 0, a_1, 0, 0, 0, 0, a_2, 0, 0, 0, 0, a_3, ...]. We want to show that doing so is equivalent to stretching the signal in the time domain.
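
A sketch of the algebra (glossing over exactly how the index range is centered): with the $1/N$ factor on the forward transform as in the code above, $a[n] = \sum_{k=-D}^{D} a_k\, e^{\,j 2\pi k n/N}$ with $N = 2D + 1$. If $f_{5k} = 0.2\, a_k$ and every other coefficient is zero, then over the larger period $5N$

$$f[n] = \sum_{k'} f_{k'}\, e^{\,j 2\pi k' n/(5N)} = 0.2 \sum_{k=-D}^{D} a_k\, e^{\,j 2\pi (5k) n/(5N)} = 0.2 \sum_{k=-D}^{D} a_k\, e^{\,j 2\pi k n/N} = 0.2\, a[n \bmod N],$$

i.e. 0.2 times a[n] repeated with period N: five copies over the widened domain.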

When I did the math it checks out: you get 5 copies of the original signal over the range [-5002, 5002] (10005 samples, which is exactly 5 * 2001, the original number of samples). The following is the code for this section, setting f_k and f[n]:

stretch_factor = 5
f_k = np.zeros(stretch_factor * N, dtype=complex)
f_k[::stretch_factor] = 0.2 * ak  # scale to keep energy in check
# New domain size after stretching
D_new = (len(f_k) - 1) // 2
# Inverse transform to get f[n]
f_n = fourier_series_transform(f_k, D_new, inverse=True)
f_n = clean_complex_array(f_n)


plt.figure()
plt.plot(np.arange(-D_new, D_new + 1), np.real(f_n), label='Real part')
plt.plot(np.arange(-D_new, D_new + 1), np.imag(f_n), label='Imaginary part', color='red')
plt.grid(True)
plt.title("Compressed signal $f[n]$ after frequency stretching")
plt.xlabel("n")
plt.ylabel("Amplitude")
plt.legend()

and this is what I get:

which is wrong: I should be getting a completely real signal, and as I said it should be 5 identical copies spaced one period (2001 samples) apart, something like this:

Why does this happen? I even tried using AI to explain why it happens and how to fix it, and it couldn't help with either.

  • This question might be a better fit for the Signal Processing SE.
  • I'm not sure if this is an acceptable solution to your assignment, but it's worth noting that if you apply np.abs(f_n) to get the magnitude of the complex signal, the result is the same as the second graph in your question.
  • @ChristophRackwitz I've asked there as well, to better my chances.
  • @NickODell It's not acceptable; cutting things short, given a completely real input signal a[n], f[n] is expected to be completely real as well.
  • @Nate3384 Whenever you crosspost anything, please link from each post to all the others; it's netiquette to at least do that.

2 Answers


Answer to your question

(initial version of my answer)

You are doing the computation of D and N wrong (and therefore, no, it's not a signal-processing question: it's more of an array-shaping question ;-)).

See what happens if stretch_factor is 2. Then D_old (my naming, but you get the idea) is 1000, so N_old = 2*D_old + 1 = 2001. Now you stretch N by a factor of 2, so N_new = 4002, which is not even odd. D_new = (N_new - 1)//2 then ends up being 2000, so the rule N = 2D + 1 that you are counting on no longer holds (2*2000 + 1 = 4001, not 4002).

Plus, since the resulting f_k array has an even size, it doesn't even have a middle, and you expect the middle value to be the "constant" (DC) part, right? So all your stretched frequencies are not really stretched: they are shifted and stretched (since the middle value is frequency 0, and it ends up elsewhere in the "stretched and shifted" array).

With an even stretch_factor this is obvious, because there isn't even a middle element.

But the problem is there even with an odd stretch_factor.

With your example of 5, what you get is

f_k.shape   # (10005,)  OK, it has a middle: 5002 values before, 5002 after
ak.shape    # (2001,)
f_k[::5] = ak * 0.2
print(f_k[5002])  # 0 !! where you expected the middle value ak[1000] (times 0.2)
print(f_k[5000])  # here is your 0.2 * ak[1000]

So you see, your frequency 0 in ak (index 1000) ends up at frequency -2 in f_k (index 5000); well, -2 index units, but you get what I mean.

So, solution => stretch D, not N

    D=(N-1)//2
    D_new = stretch_factor*D
    N_new = 2*D_new+1
    f_k = np.zeros(N_new , dtype=complex)
    f_k[::stretch_factor] = ak/stretch_factor
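
A quick sanity check, for instance, that the zero frequency now stays in the middle of f_k (using D, ak, and stretch_factor from the code above):

    print(f_k[D_new])                                        # now 0.2 * ak[1000], the old middle value
    print(np.allclose(f_k[D_new], ak[D] / stretch_factor))   # True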

Another error

(Well, it is almost the same error: it is D that you stretch, not N. So if you multiply the range by 5, the factor full_range should also be multiplied by 5. Said otherwise: full_range in the Fourier computation should be 2D, not 2D+1, while the size of the array is still 2D+1.)

So the Fourier code becomes:

def fourier_series_transform(data, pos_range, inverse=False):
    full_range = 2 * pos_range
    # Allocate result array
    result = np.zeros(full_range + 1, dtype=complex)

    if inverse:
        # Inverse transform: reconstruct time-domain signal from bk
        for n in range(-pos_range, pos_range + 1):
            for k in range(-pos_range, pos_range + 1):
                result[n + pos_range] += data[k + pos_range] * cmath.exp(j * 2 * pi * k * n / full_range)
    else:
        # Forward transform: compute bk from b[n]
        for k in range(-pos_range, pos_range + 1):
            for n in range(-pos_range, pos_range + 1):
                result[k + pos_range] += (1 / full_range) * data[n + pos_range] * cmath.exp(-j * 2 * pi * k * n / full_range)

    return result

See how I use full_range+1 when it is about the number of elements in the array, but full_range when it is about the frequency scale. So it's roughly the same error as before: the middle element (0) doesn't count when it comes to the stretch, or the frequency scale.

Figure once corrected

This is much closer to what you expected (and no more bunny ears, which I find a bit sad, even if it is probably better).

Vectorization (not strictly needed, but I can't resist)

Your code is not using numpy the right way. The whole point of numpy is to avoid iterating over arrays with for loops, as you do. I don't need to explain that too much, since you are probably aware of it: your clean_complex_array is already using numpy correctly, by computing operations in batches (applying <, round, ... to whole arrays rather than element by element).

The same should go for your Fourier computation.

On my slow machine, the non-stretched Fourier takes half a minute. The stretched inverse Fourier was taking... well, I don't know, I didn't have the patience to wait, and I started my experimentation with your code by writing a vectorized version, so that I could do trial and error on your problem without waiting minutes for each attempt.

Here is my version of fourier_series_transform

def fourier_series_transform(data, pos_range, inverse=False):
    full_range = 2 * pos_range
    # Index grids: n varies down the rows, k along the columns (for broadcasting)
    nn = np.arange(-pos_range, pos_range + 1)[:, None]
    kk = np.arange(-pos_range, pos_range + 1)[None, :]
    if inverse:
        return (data[None, :] * np.exp(2j * np.pi * kk * nn / full_range)).sum(axis=1)
    else:
        return (1 / full_range) * (data[:, None] * np.exp(-2j * np.pi * kk * nn / full_range)).sum(axis=0)

I tried to stay as close to your version as possible.

The key point here, since we need to replace 2 nested loops with a single numpy operation, is broadcasting. Broadcasting means you can combine, in one operation, arrays whose shapes differ, as long as along each dimension they either have the same size or one of them has size 1. So np.array([[1,2,3], [4,5,6]]) * np.array([[10],[20]]) is np.array([[10,20,30], [80,100,120]]).

Here, my nn is an array of size N along axis 0 and size 1 along axis 1 ([[-D], [-D+1], [-D+2], ..., [0], [1], ..., [D]]). And kk varies along axis 1 and has size 1 along axis 0: [[-D, -D+1, ..., 0, 1, ..., D]].

So if you combine kk and nn in an operation, you get a 2D array, with all possible values of n along axis 0 and of k along axis 1.

If I want to use data[k] in that operation, I can use data[None, :], which is an array of size 1 along axis 0 and N along axis 1: [[data[0], ..., data[N-1]]].

If I want to use data[n] in that operation, I can use data[:, None] which is also data, but along axis 0 (and size 1 along axis 1): [[data[0]], [data[1]], ..., [data[N-1]]]

And lastly, if I want to sum the values over all possible k, all I need to do is sum along axis 1 (.sum(axis=1)); to sum over all possible n, same thing along axis 0 (.sum(axis=0)).
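
For instance, here is a tiny illustration of those shapes, with a small pos_range so the arrays are easy to print:

pos_range = 2
nn = np.arange(-pos_range, pos_range + 1)[:, None]   # shape (5, 1): n varies down the rows
kk = np.arange(-pos_range, pos_range + 1)[None, :]   # shape (1, 5): k varies along the columns
print((kk * nn).shape)   # (5, 5): every combination of n and k
print((kk * nn)[0])      # the row for n = -2: [ 4  2  0 -2 -4]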

Maybe the vectorization is easier to understand if I show an intermediate version of it (where I vectorize only the inner loop of each of your two pairs of nested loops)

def fourier_series_transform_innervec(data, pos_range, inverse=False):
    full_range = 2 * pos_range
    # Allocate result array
    result = np.zeros(full_range+1, dtype=complex)
    if inverse:
        # Inverse transform: reconstruct time-domain signal from bk
        for n in range(-pos_range, pos_range + 1):
            result[n + pos_range] += (data * np.exp(2j * np.pi * np.arange(-pos_range, pos_range + 1) * n / full_range)).sum()
    else:
        # Forward transform: compute bk from b[n]
        for k in range(-pos_range, pos_range + 1):
            result[k + pos_range] += (1 / full_range) * (data * np.exp(-2j * np.pi * k * np.arange(-pos_range, pos_range + 1) / full_range)).sum()

    return result

This is, after all, almost as fast: the important loop to vectorize is the inner one. And it has the advantage of vectorizing along only one dimension, so there is no need for [:, None] and broadcasting.

My full code

(well, yours, essentially)

import numpy as np
import cmath
import matplotlib.pyplot as plt

D=1000
j = complex(0, 1)
pi = np.pi
N = 2 * D + 1

a=np.zeros(2*D+1)
for i in range(-99,100):
    a[i+D] = 1
threshold = 1e-10
def clean_complex_array(arr, tol=threshold):
    real = np.real(arr)
    imag = np.imag(arr)

    # Snap near-zero components
    real[np.abs(real) < tol] = 0
    imag[np.abs(imag) < tol] = 0
    # Snap components whose fractional part is close to 0 or 1
    real_frac = real - np.round(real)
    imag_frac = imag - np.round(imag)

    real[np.abs(real_frac) < tol] = np.round(real[np.abs(real_frac) < tol])
    imag[np.abs(imag_frac) < tol] = np.round(imag[np.abs(imag_frac) < tol])

    return real + 1j * imag

def fourier_series_transform(data, pos_range, inverse=False):
    full_range = 2 * pos_range
    # Index grids: n varies down the rows, k along the columns (for broadcasting)
    nn = np.arange(-pos_range, pos_range + 1)[:, None]
    kk = np.arange(-pos_range, pos_range + 1)[None, :]
    if inverse:
        return (data[None, :] * np.exp(2j * np.pi * kk * nn / full_range)).sum(axis=1)
    else:
        return (1 / full_range) * (data[:, None] * np.exp(-2j * np.pi * kk * nn / full_range)).sum(axis=0)


ak = fourier_series_transform(a, D)
ak = clean_complex_array(ak)

def mystretch(stretch_factor = 5):
    D=(N-1)//2
    D_new = stretch_factor*D
    N_new = 2*D_new+1
    f_k = np.zeros(N_new, dtype=complex)
    f_k[::stretch_factor] = ak/stretch_factor
    # Inverse transform to get f[n]
    f_n = fourier_series_transform(f_k, D_new, inverse=True)
    f_n = clean_complex_array(f_n)
    plt.figure()
    plt.plot(np.arange(-D_new, D_new + 1), np.real(f_n), label='Real part')
    plt.plot(np.arange(-D_new, D_new + 1), np.imag(f_n), label='Imaginary part', color='red')
    plt.grid(True)
    plt.title("Compressed signal $f[n]$ after frequency stretching")
    plt.xlabel("n")
    plt.ylabel("Amplitude")
    plt.legend()
    plt.show()
    return f_n, D_new, D
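
The function isn't called in the listing above; a call like

f_n, D_new, D = mystretch(5)

then draws the corrected figure.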

5 Comments

Thanks for your answer. I must admit I don't fully understand vectorization and broadcasting as coding concepts; I'll have to look them up some more to better understand your answer. But what I can tell is that in a previous iteration of the code I got the same plot as yours here; the "ringing" near the transition points is called the Gibbs phenomenon, and when I showed it to the TA they said it wouldn't be accepted as an answer; it's supposedly possible to get back an exact real signal.
I see, I'll have to recheck, maybe with another TA, because the math also agrees that it should be possible to avoid the "bunny ears" (I liked that, I'm keeping it). And you're right in the sense that we're required to "reinvent the wheel"; really standard procedure in school. Thanks for your help, I appreciate it.
@Nate3384 Wait a minute, I think I found another error. Ignore my last comment about bunny ears being unavoidable. (I am just adjusting my last correction, but I am confident you'll have a "no bunny ear" graph)
@Nate3384 Edited without bunny ears
@Nate3384 (And I removed my false comment about bunny ears being unavoidable. So I repeat what was also in that comment: you can forget all the paragraphs about broadcasting and vectorization; they have absolutely no impact on the result. Both codes, your version once corrected and my "vectorized" version, give the exact same result; the only difference is computation speed.)

I think you were applying the 5x upsampling before converting the signal from the frequency domain back to the time domain.

In this case, you should have 4 consecutive zeros between the raw a_k values, and the trailing 4 zeros should not be added. You can cross-check your f_k against the sequence generated by sp.signal.upfirdn(), for example as sketched below.
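
For instance, a minimal cross-check (a sketch, reusing the ak array computed in the question; with [1] as the filter, upfirdn does nothing but insert the zeros):

import numpy as np
import scipy.signal

manual = np.zeros(5 * len(ak) - 4, dtype=complex)   # 4 zeros after each value, no trailing zeros
manual[::5] = ak
print(np.allclose(scipy.signal.upfirdn([1], ak, up=5), manual))   # True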

Below is an example that should fit your objective

import numpy as np
import scipy as sp
import scipy.signal                            # for upfirdn
import scipy.fft                               # for ifft
import matplotlib.pyplot as plt

u = 5                                          # upsampling factor
f_k = sp.signal.upfirdn([1], ak, u)            # ak with 4 zeros inserted after each value, no tail
f_n = 0.2 * sp.fft.ifft(f_k, norm='forward')   # norm='forward': no 1/N scaling on the inverse
D_new = (len(f_n) - 1) // 2
x = np.arange(-D_new, D_new + 1)

plt.title(f'time domain f_n (after {u}x upsampling)')
plt.plot(x, np.abs(f_n), label='abs')
plt.legend()
plt.grid(True)

which should give a plot of |f_n| (titled "time domain f_n (after 5x upsampling)") showing the five copies of the rectangle across the widened domain.
