__SPRING 2000__

LAB#11 F. A. Q.

"AM MODULATION"

_______________________________________________________

Hint: To keep things clear, keep in mind the big picture of what you want to do and what you expect. For this, always refer to the plots in the lab description; they are very useful for keeping track of what to do. If necessary, label these plots with the numerical values you want to use. For example, label the carrier frequencies with 4000 Hz and 9000 Hz, etc.

_______________________________________________________

0. Some of these questions don’t seem to make sense with respect to this semester’s lab.

ANSWER: Correct. The FAQs are based on the labs from Fall 1999 (specifically, LAB#12). Refer to that lab description when certain section numbers are cited.

_______________________________________________________

4.1 Transmitter: myxmit.m

1. How do we change the y-axis range?

ANSWER: Set the zoom capability: type "zoom on" in command window. Or, set the axis range (help axis) as follows: axis([left,right,bottom,top])

2. Am I correct to assume that we multiply x1(t) and x2(t) by two different carrier frequencies (one by 4000 Hz and one by 9000 Hz), or do we multiply them by the same carrier frequency? Also, should the plot of the spectrogram show that x1(t) and x2(t) overlap each other, or should there be some definite separation between the two?

ANSWER: The carrier frequency determines the channel. You need two. If they overlap, you’ll never be able to separate them in the receiver (demodulator).
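
The lab itself uses MATLAB, but the transmitter idea can be sketched in Python/NumPy. (The input signals and the sampling rate fsim below are made-up stand-ins, not the lab's actual signals.)

```python
import numpy as np

fsim = 44100                        # assumed simulation sampling rate, Hz
t = np.arange(0, 1.0, 1/fsim)       # one second of samples

x1 = np.cos(2*np.pi*500*t)          # toy stand-ins for x1(t) and x2(t)
x2 = np.cos(2*np.pi*1000*t)

# Each input gets its OWN carrier, so the two channels land in
# disjoint frequency bands around 4000 Hz and 9000 Hz.
y = x1*np.cos(2*np.pi*4000*t) + x2*np.cos(2*np.pi*9000*t)
```

A spectrogram of y should then show energy near 4000 Hz and near 9000 Hz, with a clear gap in between.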

3. I’m confused about the determination of the maximum usable bandwidth. Do we calculate for the two specific signals given to us? Because I can put in chirp signals and invalidate any calculation of wb that I made before. Or are the wb’s supposed to be equal for each carrier frequency, i.e., each signal gets an equal sized wb?

ANSWER: It is the maximum. So how big can the BW be before you interfere with a neighboring channel? Both channels should have the same M.U.B. It should not be signal specific. It should depend only on the carrier frequencies.

4. How do you want us to calculate the maximum usable bandwidth of the BPF without using the anfilt() function?

ANSWER: You have to use anfilt(), because that is what is making the BPF. The max usable BW depends on the non-ideal filter that you implement. In this case, that non-ideal filter is the one implemented by anfilt().

5. When testing, I get a spectrogram that is 1 sec long in time, not 1.25 nor 1.6. What could I be doing wrong? I am using all the correct equations and modulations. I'm using specgram(yy,1024,Tsim,512) to compute the spectrogram.

ANSWER: The length of the spectrogram doesn't really matter, because you just want to verify the frequency content of the signal. I don't have any suggestions as to why yours would cut off at 1.0 sec. However, you shouldn't be using "Tsim" in specgram(); the third argument needs to be a sampling frequency.
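
In MATLAB the fix is specgram(yy, 1024, fsim, 512), where fsim is the sampling rate. A rough Python/SciPy analogue (the signal and rate below are made up) shows why: with the true sampling frequency, the time axis spans the full signal duration.

```python
import numpy as np
from scipy import signal

fsim = 8000                          # assumed sampling rate, samples/second
dur = 1.25                           # assumed signal duration, seconds
t = np.arange(0, dur, 1/fsim)
yy = np.cos(2*np.pi*1000*t)          # simple test signal

# Pass the SAMPLING FREQUENCY (not a time vector) to the spectrogram:
f, tt, S = signal.spectrogram(yy, fs=fsim, nperseg=1024, noverlap=512)
# tt now runs out to about 1.25 s, and the energy sits in the 1000 Hz bin.
```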

6. According to the writeup, we should have channel frequencies at 4000 Hz and 9000 Hz. 4000 Hz comes from using w = 2*pi*4000 from the warmup, but why are we using w = 2*pi*1000, as stated in part 4.1(b)? Won't that give us a 1000 Hz channel? Also, what should our inputs to myxmit.m be? Should they be x1t and x2t, the two input signals, as well as the wc for each carrier sinusoid?

ANSWER: The cos(2*pi*1000*t) is an INPUT signal. It makes a nice simple test signal. Then it will be modulated by the 9000 Hz carrier.

7. For problem 4.1(c), how do we find the maximum usable bandwidth? And, basically, the same question for the first part of 4.2(a).

ANSWER: Make a sketch of the Fourier transforms to show a typical spectrum for each channel. Look at Fig. 5. You are trying to figure out how close together you can pack the FTs for the two channels.

4.2 Receiver: mytuner.m

1. What does mytuner.m do? Does it simulate the receiver? I assume that mytuner.m should take in any frequency, and in our case, we use fstation as 4000 Hz and 9000 Hz. I am not sure what mixerphase is; can you give me an example? What does the bandwidth depend on, mixerphase? Or it is just a constant for all cases?

ANSWER: mytuner() tunes to ONE station. You give it the frequency of the station; we will be using 4000 Hz or 9000 Hz. The mixer multiplies by a cosine, and the cosine can have a phase. Look at Figure 1 and Figure 4. Design it when you need it (as a function of the tuner frequency). Don't HARD-WIRE it.

2. I need help with the terminology of mytuner.m. In the command

vv = mytuner(yy,fstation,mixerphase), I understand vv and yy but have the following questions:

fstation – Do you mean the mixer that we multiply by to demodulate, i.e., just a fancy word for “mixer”?

mixerphase – The phase of the mixer, like in Figure 4 of the lab, the psi of cos(wc+psi)?

ANSWER: Actually, "mixer" is a fancy word for multiplying by a cosine (and fstation is that cosine's frequency). Yes, mixerphase is referring to the phase of the mixer.

3. When determining the maximum usable bandwidth, is that equal to wb (omega-b) or is it equal to 2*wb?

ANSWER: The maximum usable BW is a property of the channel. How wide can the AM channel be? This is what you should determine when answering the question: “what is maximum usable BW.” For example, in commercial AM radio, each station is allocated 10kHz by the FCC. That is the BW they are allowed to use. On the other hand, wb refers to a property of the transmitted signal. wb is the highest frequency in the signal. It is a separate question to ask what is the maximum signal frequency that can be sent through the AM channel.
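
As a concrete back-of-the-envelope check with the lab's carriers at 4000 Hz and 9000 Hz (assuming ideal filters, which is the starting point before the transition zones are accounted for):

```python
fc1, fc2 = 4000, 9000            # channel center (carrier) frequencies, Hz
spacing = abs(fc2 - fc1)         # 5000 Hz between channel centers

# With IDEAL filters, each channel could extend halfway toward its
# neighbor on each side, so the whole channel (from fc - wb to fc + wb)
# can be as wide as the spacing:
max_usable_bw = spacing          # 5000 Hz total channel width
max_signal_freq = spacing / 2    # highest signal frequency wb <= 2500 Hz
```

Non-ideal filters eat into these numbers: the transition zones of the BPF must be subtracted, which is exactly the recalculation asked for in 4.2(c).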

4. Do you want us to solve for the "maximum usable bandwidth" mathematically, or can we use plots of the frequency responses of the BPF's?

ANSWER: Use plots of the frequency response to determine the passband width, stopband edges and “transition zone” of the BPF. Then use these numbers to “mathematically” solve for the MUB.

5. I know we are supposed to design a BPF, apply it to our signal, and then multiply that with the mixer. However, my filtered signal is a different length from my mixer signal, so I can't multiply them together. I tacked on some zeros to fix it. Will this have any ill effects?

ANSWER: You are generating the cosine signal for the mixer. You can control its length. Make its length equal to the length of the signal out of the BPF.

6. When we first apply the bandpass filter to our signal y(t), we use anfilt() to generate the impulse response for this filter; let's call the returned vector bpf. Then do we convolve bpf with yt, using firfilt() or filter()? Do we need to use an FIR or IIR filter on this signal y(t)?

ANSWER: The simulation is an FIR filter. What you are calling "bpf" is the impulse response of an FIR filter. This should be sufficient information to figure out how to get the output of the BPF from the input and the impulse response.

7. To apply the filters, should we convolve in the time domain or multiply in the frequency domain? I am not getting the right results either way. In the time domain, convolution results in a signal that is longer than before by length(filter) - 1, and I can't determine which part to chop off (if any). So I chop off the first and last 111 samples, and then go on to shift and filter again, but the specgram() of the final signal is not the same as the original signal. In the frequency domain, y(t) is too long for MATLAB to do a Fourier transform on via freqz(). I guess I am looking for guidance on how to choose the domain to filter in, and, if we are to filter in the time domain, how to handle an increased sample length.

ANSWER: It should not matter which end you chop. If you use filter(), you get the same length output. You can't really do the filtering in the frequency domain with the tools in this lab; come back and take ECE-4270 to learn how to do that.

8. I created a bpf with anfilt.m and then I convolved it with the signal yy coming in. I tried to multiply this by the mixer signal. I keep getting an error; normally I can just chop off the vector, but the size of my convolved signal is 223, and the size of my mixer cosine is 56025!

ANSWER: If your convolved signal is length 223, there must be a bug in what you’re using to convolve. Are you using conv()? Check the lengths of the inputs to conv(). The output length = sum of input lengths minus one.
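
A quick check of the length rule (the lengths here are illustrative; 56025 matches the mixer length from the question, and the 201-tap impulse response is an assumption):

```python
import numpy as np

x = np.ones(56025)               # signal of the same length as the mixer cosine
h = np.ones(201)                 # an FIR impulse response from the BPF design
y = np.convolve(x, h)

# len(y) = len(x) + len(h) - 1: convolution makes the signal slightly
# LONGER, never drastically shorter. A length-223 output means both
# inputs to conv() were tiny (e.g., two short vectors were convolved).
```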

9. When I play the harp2k signals after going through the whole system, I get the same sound as the original but my spectrogram shows a weaker component at higher frequencies. Is something wrong with my filters or is this to be expected?

ANSWER: Filters are not perfect. They do not have exactly ZERO stopbands. The filter specs only guaranteed that the stopband would be down by 1/100.

10. For designing the BPF, I am confused as to how we generate the impulse response. I know that we are supposed to use anfilt(), and the bandwidth and fsim arguments make sense. However, I don't understand the frequency argument. Are we supposed to have one filter for each channel (as in Figure 5), corresponding to the wc of the original x1 and x2? If so, then where does fstation come into play, especially if our BPF is supposed to be dynamic?

ANSWER: fstation is a frequency. It tells you where you are tuning. The BPF has to pass frequencies around the location where you are tuning (i.e., where you want to listen, which is around fstation). When you change fstation, the BPF has to change because you are trying to get a new part of the frequency spectrum.

11. I wanted to know whether you want us to make the gain of the low pass filter in the function mytuner.m equal to 2 at the center frequency, or are we to leave it at 1? (I ask this because in the next parts you ask us to comment on the amplitude of the output signal at different phases.)

ANSWER: Either way: it's "trivial" to make the gain equal to 2, but it's OK to leave it at 1. Comment on relative amplitude. Later on, when you listen, the gain won't matter.

12. I have a quick question about the BW. Can you give me a hint on how to recalculate it in section 4.2(c)? Do we use trial and error or is there a certain formula that can be used?

ANSWER: Find the “transition zone” of the BPF (maybe by trial and error). Decrease the BW by an amount to accommodate the transition zone.

13. When I plot the signals with mixerphase = pi, the output is almost exactly like the input, and the spectrogram of the output reveals that only the frequencies around 0 Hz have been emphasized (the red regions in the spectrogram). However, with a phase of pi/2, I find that the signal at frequencies other than 0 Hz has not been completely eliminated. Why is there almost complete elimination of unwanted spectral components (at frequencies other than around 0 Hz) with mixerphase = 0 or pi, and not with mixerphase = pi/2?

ANSWER: This is interesting since theory answers your question. To do so, draw the spectral diagrams (theoretically) of what is supposed to happen. You should be able to get the theory to tell you EXACTLY what will happen to the components around w=0 and those around w = 2*wc. When you do the FT (spectral) diagrams, keep track of the real and imaginary parts of the FT.
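
To see the theory numerically: mixing x(t)cos(wc*t) with cos(wc*t + psi) produces a baseband term (x(t)/2)cos(psi), so psi = 0 or pi gives full (possibly inverted) amplitude while psi = pi/2 gives zero. A Python/NumPy sketch (the test frequencies are made up):

```python
import numpy as np

fsim, fc = 80000, 9000
t = np.arange(0, 0.5, 1/fsim)
x = np.cos(2*np.pi*500*t)              # baseband test signal
y = x*np.cos(2*np.pi*fc*t)             # transmitted (modulated) signal

def recovered_gain(psi):
    """Amplitude of the baseband copy after mixing with phase psi."""
    v = y*np.cos(2*np.pi*fc*t + psi)   # mixer output
    # Correlating against x isolates the (x/2)*cos(psi) term, playing
    # the role of the LPF; the 2*fc components average out to zero.
    return 2*np.dot(v, x)/np.dot(x, x)
```

With this sketch, recovered_gain(0) comes out near 1, recovered_gain(pi/2) near 0, and recovered_gain(pi) near -1, matching the cos(psi) prediction.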

14. In 4.2(a), are we supposed to use trial and error to get the correct bandwidth? In 4.2(c) when we recalculate, what does this entail? Secondly, if we have already recalculated the BW in 4.2(a), then is 4.2(c) just a descriptive answer of what is going on or are you looking for some mathematical explanation? Finally, is 2*wb (omega-b) in figure 5 the bandwidth? For real AM radio, the bandwidth is 10kHz but max wb is 5kHz, correct?

ANSWER: Perhaps you will need to use trial and error. Use the same procedure that you did in the warmup. I guess you will have to try a couple of different frequency responses.

Secondly, I would draw a sketch of how the frequency responses overlap for the two channels: show that the passband of one BPF is in the stopband of the other. The answer to your final question is YES.

15. Are there two BPF's in this section? If we multiply the signal by a mixer, then shouldn't we be able to use just the LPF stated? What about the other BPF that's to be made "on the fly"? Isn't a standard LPF enough, because the mixer is what moves the components inside the LPF?

ANSWER: I think you can do it without the BPF on the front end. HOWEVER, the specifications of the lab require you to use such a BPF. Also, real receivers need a BPF to isolate the channels. The BPF is used before the mixer. The LPF is used after. Hence, in a typical receiver, you will have a BPF at the front, which isolates the frequencies, then you will have the mixer to demodulate your received signal at the frequency that has been determined by the BPF. The last stage in the demodulation process is to apply a LPF to get your signal back.
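
The BPF-then-mixer-then-LPF chain can be sketched in Python/SciPy. (The filter lengths, bandwidth, and test signals below are all assumptions for illustration, not the lab's specifications.)

```python
import numpy as np
from scipy import signal

fsim = 80000
t = np.arange(0, 0.5, 1/fsim)

# Two channels "on the air", carriers at 4000 Hz and 9000 Hz as in the lab:
yy = np.cos(2*np.pi*300*t)*np.cos(2*np.pi*4000*t) \
   + np.cos(2*np.pi*700*t)*np.cos(2*np.pi*9000*t)

def tuner_sketch(yy, fstation, mixerphase, bw=2000.0):
    """Rough analogue of mytuner.m: BPF, then mixer, then LPF."""
    n = np.arange(len(yy))/fsim
    # 1) BPF around fstation isolates the wanted channel.
    bpf = signal.firwin(301, [fstation - bw/2, fstation + bw/2],
                        pass_zero=False, fs=fsim)
    v = signal.lfilter(bpf, 1, yy)               # like filter(): same length out
    # 2) Mixer shifts the channel down to baseband (and up to 2*fstation).
    v = v*np.cos(2*np.pi*fstation*n + mixerphase)
    # 3) LPF keeps the baseband copy and rejects the 2*fstation copy.
    lpf = signal.firwin(301, bw/2, fs=fsim)
    return 2*signal.lfilter(lpf, 1, v)           # gain 2 undoes the mixer's 1/2
```

Tuning to 4000 Hz should recover the 300 Hz tone, tuning to 9000 Hz the 700 Hz tone, and a mixerphase of pi/2 should nearly zero the output.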

16. I don’t understand why we need to use the width of the passband, stopband and transition zone to get the BW for our BPF. Can’t I just use the passband width and the specified gain to approximate my BW? If we need to use all those things listed above, how do we put it all together? I’ve already convolved yy with my BPF, and mixed the resulting signal with a sinusoid at the proper frequency (to center my original signal back to where it was to begin with — 6500 Hz?) with its mixerphase. From this point on, what would be my next step? What are the specifications for the LPF?

ANSWER: You do not want the two passbands regions to overlap. Now that you have centered the signal you want around w=0, you want a LPF that has a passband large enough to catch the signal you want. (Look at the previous question for more details.)

17. When testing to see if the input and final output match, are we supposed to compare the input with mytuner.m run twice (using 4000 Hz and 9000 Hz) and add them, or using only one of them? Also, how do you want us to measure the gain of the BPF? How can we tell from the spectrogram that the signal is “down by a factor of 100”?

ANSWER: Run it twice, but listen to them separately. At the transmitter, you should have already ADDED the two channels together. You can measure the signal strength in the spectrogram by getting the color scale; use the MATLAB command colorbar. Also, the behavior of specgram() and plotspec() is DIFFERENT when they plot. The function plotspec() plots the magnitude of the spectral components, whereas specgram() makes the plot in "dB" (which means it plots 20*log10(abs())). For something to be down by 1/100, it has to be 40 dB lower. So if you are looking at a specgram() plot, look for a -40 dB difference. If using plotspec(), then colorbar gives the amplitude, and the max divided by 100 will be barely visible.
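
The arithmetic behind the -40 dB figure, using the standard magnitude-to-dB conversion:

```python
import math

stopband_gain = 1/100                 # the lab's stopband spec
db = 20*math.log10(stopband_gain)     # comes out to -40.0 dB

# specgram() plots in dB, so "down by a factor of 100" reads as -40 dB;
# plotspec() plots raw magnitude, so the same thing reads as max/100.
```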

18. I have created a BPF that is supposed to isolate my desired signal received from the transmitting station before applying the mixer. However, from the spectrogram of the isolated signal, it is centered on a different frequency than the original modulated signal. For instance, if I isolate the 9000 Hz signal, I get an identical signal but it is centered on 90 Hz rather than 9000 Hz. What could be causing this reduction by 10^2 in frequency? Secondly, how do I implement the mixerphase? Should I extract the phase of any signal from the transmitter, and if so how do I do it? Finally, in part 4.2(a), how do we apply the constraint of 0.01?

ANSWER: It sounds like your plot is correct, but MATLAB sneakily scaled the labels on your plot. Check for a x10^2 multiplier somewhere in the margins of your plot. Secondly, mixerphase is an input; it becomes the phase of the cosine used in the mixer. You will experiment with the phase in a later section of the lab. Finally, this is EXACTLY like the warm-up, except the stopband constraint was 0.1 in the warm-up. Go back through the steps of the warm-up.

19. I don’t understand why we use fstation and mixerphase in mytuner.m, or what they are.

ANSWER: On an AM radio you have a "dial" to select the station. That dial selects the frequency, and we are calling it "fstation." Mixerphase is just the relative phase between the transmitter and receiver. In practice, the two carrier signals (one modulates the signal at the transmitter; the other demodulates it at the receiver) have the same frequency but different phases. This gives us the capability to test phase mismatches.

20. In 4.2(a), the constraint on the BPF states that "the gain at the neighboring channel that could interfere should be less than 1/100 = 0.01." What are we to assume is the frequency for this "neighboring channel," since we are supposed to make the bandwidth as wide as possible?

ANSWER: The center frequencies of the 2 channels are 4000 Hz and 9000 Hz, which are the frequencies of the carrier signals.

21. Somewhere I read that mixerphase = pi/2 creates a dramatic difference. I read in another post that humans can’t hear phase change. Are we supposed to hear a change? I think that I should hear something different but don’t.

ANSWER: Yes, the pi/2 case causes a dramatic amplitude change. You can't hear a phase change, only an amplitude change. This particular phase change makes a dramatic impact on the amplitude, and you can notice that.

22. I still can’t get my filter to meet the specification. How can I exactly determine the passband and stopband? Do I have to print the filter plot out and measure the bands at gains of 0.9 and 0.1 with a ruler and a pencil? Can MATLAB help me do that?

Secondly, when working with the mystery signal, I was able to extract the signal for f = 4000 Hz. However, I get very fuzzy sounds for the case of f = 9000 Hz. I have varied my BW as many times as possible, but still get the same result.

In 4.4(a), there is an obvious difference in amplitude of the outputs using the two phase-shifts. How do I make an accurate measurement of the amplitude of these output signals? I tried a plot of the output signal, but it didn’t help; it is not sinusoidal, it is crowded, and the amplitude seems to vary with time.

ANSWER: Paper and pencil might work. In MATLAB use the “Zoom” capability to look carefully at the passband edges and the stopband edges. In answer to the second part of your question, you should know what the BW is from your calculations of the max usable BW. In answer to the question about 4.4(a), are you using a “known” test signal, i.e., the chirp and the sine wave? These have known amplitudes.

23. After I run my yy from 4.1 through the first band pass filter, I get the constant frequency signal in yellow and the varying frequency signal in red on a spectrogram. After the whole mytuner.m function, I get the varying frequency in red at zero (I ran it with the fstation=4k) and a ghost of it at 8k in yellow. Is this right? Technically, shouldn’t those be eliminated in both cases? Is this because we are not using ideal filters?

ANSWER: CORRECT, your filters are not ideal. So the stopband is not zero. It is only guaranteed to be less than 1/100.

24. I am not sure how to determine the maximum usable bandwidth. If the bandwidth of the first signal is wb1, the bandwidth of the second signal is wb2, the frequency of the first modulator is wc1, and that of the second is wc2, an equation relating the bandwidths to the modulator frequencies should be: |wb1 + wb2| < |wc1 - wc2| = 5000 in this case, correct? Does the real implementation differ from this ideal because of the imperfection of the filters that are used?

ANSWER: Yes, this is right, but you have to account for NON-IDEAL filters, in which the transition band is not zero width.

4.3 Testing and Listening

1. I am trying to use harp2k.mat as a test, but the sound length is 1.326 seconds long. I was under the impression that our signal was supposed to be 1.25 seconds long. So, I have created my methods with this implementation in mind and the harp2k.mat won’t work properly.

ANSWER: Don’t assume a fixed length. You should probably have something that detects the length and generates your cosines for that length. You’ll need this later on because the “mystery” signal is even longer. Just do length() after you load it. For a quick fix, just truncate the harp signal. Figure out how many samples you need for 1.25 seconds and copy the first so many samples to another vector.
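
The length bookkeeping might look like this in Python/NumPy (the 1.326 s duration comes from the question; the sampling rate fsim is an assumption):

```python
import numpy as np

fsim = 44100                              # assumed sampling rate, Hz
xx = np.zeros(int(1.326*fsim))            # stand-in for the loaded harp2k vector

# Don't hard-code 1.25 s: build the carrier/mixer to match the actual length.
n = np.arange(len(xx))                    # like 0:length(xx)-1 in MATLAB
carrier = np.cos(2*np.pi*9000*n/fsim)     # same length as xx by construction

# Quick fix instead: truncate the signal to exactly 1.25 s.
xshort = xx[:int(1.25*fsim)]
```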

2. I’m not sure what the filtered sounds should sound like. I’m getting a fuzzy version of the original but the plotspecs look the same. Does this sound right?

ANSWER: It should come through very clear, almost the same as the original (if your system is working correctly).

3. When I test the signals, I still hear the two combined signals. However, when I plot the frequency responses of the two BPF’s and the LPF, I have no intersection. I then tried to shorten the filter lengths, but I still hear the combined signals. I can’t tell which is stronger because the voice is loud to begin with and the music is softer.

ANSWER: At this point the two signals should be frequency shifted to different parts of the spectrum and should be disjoint. The best bet is that the BPF is only partially removing the other signal. Run mytuner.m for one frequency (say 4000 Hz) and look at the spectrogram at the output of the BPF to make sure that the other channel is almost completely gone. The stopband has a "gain" of 0.01, so the other signal's spectrum should be down by a factor of 100. Also look at the spectrogram at the output of the mixer. You should not have to play with the filter length. The most likely problem is the location of the BPF. In fact, when I was testing this I had exactly that bug.

4. My tests in 4.1(b) seem o.k., but in 4.3, when I test the harp2k signal, I can hear one of the sounds well, but the other has a residue sound of the first sound. What could be wrong in my coding?

ANSWER: It sounds like the passband of your filter is too wide. Said another way, your stopband is too far from the center frequency of the passband. Shorten your passband width and the sound from the other signal should go away (i.e., shorten the passband of your filter so that the passband of the other channel lies completely in the stopband of this one).

4.4 Testing and Degradations

1. When psi = pi, should the result be negative? Because v(t) = x(t) cos(pi) = -x(t); I understand that v(t) = x(t) cos(pi/2) = 0. When I listen to it, -x(t) sounds the same, and the plotspec looks the same too, but shouldn't it be flipped because of the negative sign?

ANSWER: Maybe Euler would have said: "exp(j*pi) = -1." Think magnitude and phase. What is the magnitude and phase of the output FT? Usually we make spectrum plots of the magnitude and don't pay attention to the phase. Also, it turns out that the human ear can't hear a phase change.

2. For this part, which set of input signals do we use? Do we use the signals from harp2k, or do we use the original signals from part 4.1? Then do we need to do the phase experiment on both signals, or do we just choose one of the two? And, if you want us to do this part using the harp2k signals, how do we measure the exact amplitude (considering it is such a complicated signal when looking at the graph); do you want us to say that using phase x inverts the signal, using phase y causes it to die down, etc.?

ANSWER: Use the ones from 4.1, then you have mathematically defined signals that you can trace through the system.

3. For part 4.4(a), what input signal(s) should we analyze, the ones from harp2k.mat, or the ones from section 4.1(b)? I know for part (b), it is supposed to use the signal at 9000 Hz carrier frequency, but which one is it? For part 4.4(b), I need a hint on how to determine the Fourier transform at the mixer output.

ANSWER: Use the ones from 4.1(b), because you want to do some analysis and it helps to have signals that are mathematically defined. Put the sine wave at 9000 Hz, because that’s what part (d) wants. Part 4.4(b) is just more frequency shifting. Look at your notes. This was covered in lecture and in recitation. Also it is discussed in Chapter 13.

4. Is x2(t) supposed to be cos(2*pi*(1000)*t) instead of sine?

ANSWER: The two cases would be more or less the same, because there is a phase difference of pi/2 between sine and cosine. In the lab, I specified sin() because it forces you to think in terms of a FT that has imaginary values and keeps you focused on the fact that the negative-frequency part of the spectrum is different. Bottom line: use sin().

5. When I put in a phase shift of pi or pi/2, I can hear a difference in the tone, but I am unable to detect a difference in the spectrogram. However, I am able to detect a change in the spectrogram when I change the station frequency by a small amount as in 4.4(d), or when I change the phase with the two signals provided by harp2k.mat. Are the changes in amplitude so small when shifting by pi/2 or pi that they are not as obvious using plotspec(), or do I have a flaw in my design such that I am not getting a good look at the changes using plotspec()?

ANSWER: The PI/2 case will be dramatically DIFFERENT!

6. For the amplitude, do you want a spectrogram comparison or a comparison based on the actual plot of the output signal?

ANSWER: Probably this is easiest with the time signal. Use a short fragment of the signal. Note that specgram() has some pitfalls with real time scaling.

7. Can someone give me a hint on what the exact expression for the final output of the AM system looks like for section 4.4(d)? I am able to derive the spectrum but am unable to figure out the formula. Also, where it says “make a detailed sketch of the Fourier Transform”, may I assume that means the spectrum plot?

ANSWER: If you have the spectrum, it only contains lines. That should be easy to turn into a formula. Your assumption about the spectrum plot is correct.

8. I’m trying to do the x2(t) = sin(2*pi*1000*t) part with math. Through multiplication of frequency responses, I’m getting an equation with about sixteen parts. Is this correct?

ANSWER: You should have impulses in frequency. Eight terms probably, not sixteen.

9. For the mixerphase alterations, my spectrograms appear to be exactly the same; the only difference I saw was that the yellow coming off of my frequencies goes further at pi/2 than at 0. Am I doing this correctly?

ANSWER: Yellow and red mean something about amplitude. Look at the output signal in the time domain to get a precise measure of the amplitude.

10. When modifying the phase, I used the mixerphase input in mytuner.m. I know that my demodulator works correctly at 0 phase, and there is a small difference at pi/2, but at pi, I'm expecting to hear nothing (since, in effect, the signal is multiplied by cos(phase)); however, this wasn't the case.

ANSWER: cos(pi) = ? (it's not equal to zero). The signal will be multiplied by (-1), and hence you will not hear any noticeable difference, since this is a change in the phase and not in the amplitude (think in terms of amplitude and phase). This is not the case for the pi/2 phase.

11. I know we’re supposed to use the signals from 4.1, but shouldn’t those from part 4.3 work also? (Mine don’t). Everything up to part 4.3 is fine, but if I run it through with a phase of pi/2, I get interference. It was my understanding that a phase change would only affect the amplitude and not the bandwidth.

ANSWER: If the in-phase (0) and quadrature (pi/2) signals are mixed up, then you MIGHT have introduced a phase shift (or a time shift) in your processing. If so, you would not be synchronized with the transmitter. Try to find an angle for which the output is ZERO; to do this you have to use trial and error. This will be the angle to use in your system. Also, this angle minus 90 degrees should give the other signal.

4.5 Quadrature A. M. System

1. I loaded the file lab12sig.mat, and only one vector came out instead of two. I put that vector through the filter and into one of two frequencies, but I don’t seem to get the sound right. When I change the phase, the sound doesn’t seem to change.

ANSWER: There is only one vector because there is only one signal. It is the combination of the transmitted signals; it has several sounds buried inside. In this case you will only be running the mytuner() function. lab12sig.mat contains the sum of several transmitters: I ran myxmit() a bunch of times (with some phase modifications) and added the signals together. The xtr signal has already been modulated with carrier signals, so you don't have to use myxmit() at all for this part of the lab. xtr is the signal out of the transmitter; equivalently, it is the input to the receiver, i.e., the input to mytuner().