FFT on ATMega32 - Fourier Analysis



When I was studying at King’s College, I remember the professor opening the class by explaining that any irregular wave or signal can be described as a sum of sine and cosine waves of different frequencies and amplitudes...

Starting from first principles, and without any notes, he wrote:

 f(t) = A sin(ω₁t) + B cos(ω₂t) + ...

After 2 hours I had a stack of papers full of equations. It is a very simple idea but the basics always get overlooked.

Learning about the Fast Fourier Transform (FFT) and signal processing was one of the reasons why I decided to study Electronic Engineering in 1992. Unfortunately, the only place I could afford to study was at my local college in Croydon. It turned out to be a very poor quality experience that swiftly put an end to all of my educational goals.

I have always been very interested in the implementation of the FFT. At university I wrote my first program in C utilising a DFT algorithm. A little later, I wrote another program implementing an FFT algorithm. Although C was fast, I realised that I could make it even faster using 80x86 assembly language.

In 2004 I ported my assembly language code to the AVR microcontrollers. Fourier was also one of the reasons why I decided to continue with assembly language and keep up my practice.

My current FFT implementation for the ATMega32 is in assembly language. It is the fastest version I have written, because it makes full use of the built-in hardware multiplier.

When most engineers evaluate the ATMega32, they tend to focus on blinking LEDs and port operations. They do not realise that it also has useful maths capability. Its hardware multiplier (the MUL, MULS, and FMUL family of instructions) completes an 8 × 8-bit multiplication in two clock cycles, so a Multiply and Accumulate (MAC) sequence, built from a hardware multiply followed by an add, runs very fast. This is ideal for implementing FFT algorithms.
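The multiply-and-accumulate idea can be sketched in plain C. This is an illustrative Q7 fixed-point version with my own function names, not the author's AVR assembly; on the ATMega32 a compiler can map the 8 × 8 multiply below onto the hardware MUL/MULS instructions:

```c
#include <stdint.h>

/* Illustrative Q7 fixed-point multiply-accumulate: an 8x8 -> 16-bit
 * product (one hardware multiply on the AVR) followed by an add. */
static int32_t mac(int32_t acc, int8_t sample, int8_t coeff)
{
    return acc + (int16_t)(sample * coeff);   /* multiply, then accumulate */
}

/* A dot product is just a MAC repeated over the whole data set,
 * which is exactly the inner loop of a DFT. */
int32_t dot(const int8_t *a, const int8_t *b, int n)
{
    int32_t acc = 0;
    while (n--)
        acc = mac(acc, *a++, *b++);
    return acc;
}
```

The speed of the whole transform comes down to how quickly this one small loop runs, which is why the hardware multiplier matters so much.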

Fourier Basics



If you are new to the subject, then the DFT algorithm is the best place to start. So what do all these formulas actually mean?

F(u) = \int_{-\infty}^{\infty} f(x) \, e^{-j 2 \pi u x} \, dx

F(u) signifies the Fourier transform, whereas f(x) signifies the signal. F is an operator, just as + and − are operators: when you see an operator, you have to do something to the value. In this case F represents a set of steps you need to perform to change one set of values into another.

Mathematics is very object oriented: you can have a symbol such as F represent a whole set of steps, and you can define how F behaves under basic operators such as +, ×, or ÷. Mathematicians like to modularise equations into known truths, which they later simply plug into other equations to build larger ones.

If you take a sample of a signal f(x) using an Analogue to Digital Converter, then you have the amplitude of the signal at time t. Taking successive samples at regular intervals gives you the amplitudes at various times t. You could plot this data with amplitude on the y axis and time on the x axis, and it would give you an approximate picture of the signal. The data you have is therefore said to be in the time domain. This is really basic stuff and not the rocket science it is made out to be.

What the Fourier transform F(u) does is transform the data from the time domain to the frequency domain. Each piece of transformed data then signifies the power of the signal at a particular frequency. When you plot the transformed data you have power on the y axis and frequency on the x axis, so the data is said to be in the frequency domain.

The transform involves sweeping through a range of frequencies. The transformed data then indicates how much of each frequency component is present in the data set.

I bet nobody has made it this easy to understand, because it really is that easy.

Function Symmetry


f(x) = \int_{-\infty}^{\infty} F(u) \, e^{j 2 \pi u x} \, du

The above equation describes a symmetry: if you push your frequency-domain data through the same process, with the sign of the exponent reversed, it transforms back to the time domain! This is a very useful feature in electronics and computing. Imagine you have a sound sample that contains noise. You could convert the data from the time domain to the frequency domain, plot a graph so you can see visually which frequency is the noise, remove the offending frequency, and then convert it all back to the time domain and listen to it. Oops, I've just given away how a filter works...

Finite

F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x) \, e^{-j 2 \pi u x / N}

f(x) = \sum_{u=0}^{N-1} F(u) \, e^{j 2 \pi u x / N}

The two equations shown above are the same transform pair, except that the integral sign is replaced by a summation, which means the same thing. Remember the old mathematics days of using strips to measure the area under a curve? Also, instead of infinity we use N. Infinity represents a continuous signal with no beginning or end. In reality, and for practical purposes (life being finite! :-), signals have to be finite, so N is used.
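The strips idea is exactly what the summation is doing. As a minimal sketch (a toy function of my own, approximating the area under f(x) = x² between 0 and 1):

```c
/* Approximate an integral with n rectangular strips, just as the
 * DFT replaces the Fourier integral with a finite sum. */
double area_by_strips(int n)
{
    double sum = 0.0;
    double width = 1.0 / n;
    for (int k = 0; k < n; k++) {
        double x = (k + 0.5) * width;   /* midpoint of the k-th strip  */
        sum += (x * x) * width;         /* strip area = height x width */
    }
    return sum;                         /* approaches 1/3 as n grows   */
}
```

The more strips you use, the closer the sum gets to the true area, just as more samples give a better picture of the signal.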

Return of Euler


e^{j \theta} = \cos \theta + j \sin \theta

Anyone who did maths at school will have come across Euler's formula. I knew there was some purpose to it! It is a standard identity, and you simply plug it into the above equations to give:


F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x) \left[ \cos \left( \frac{2 \pi u x}{N} \right) - j \sin \left( \frac{2 \pi u x}{N} \right) \right]

Here is another standard notation to plug in:

F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x) \left( \cos \theta - j \sin \theta \right)

where,

\theta = \frac{2 \pi u x}{N}

Since waves are complex, we describe f(x) as a complex wave like this:

f(x) = R(x) + j I(x)

Plug it all in to give:

F(u) = \frac{1}{N} \sum_{x=0}^{N-1} \left[ R(x) + j I(x) \right] \left( \cos \theta - j \sin \theta \right)

The Two Pairs



This gives us the real part:

\operatorname{Re} F(u) = \frac{1}{N} \sum_{x=0}^{N-1} \left[ R(x) \cos \theta + I(x) \sin \theta \right]

This is the imaginary part:

\operatorname{Im} F(u) = \frac{1}{N} \sum_{x=0}^{N-1} \left[ I(x) \cos \theta - R(x) \sin \theta \right]

Written simply as:

F(u) = \operatorname{Re} F(u) + j \operatorname{Im} F(u)

Example DFT Program



You can see straight away that the equations shown above can easily be turned into a computer program. We already have the basis for an implementation.

#include <math.h>

#define N  64                  /* number of samples */
#define PI 3.14159265358979

double fx[N][2];               /* input signal: [0] real, [1] imaginary */
double Fu[N][2];               /* transform:    [0] real, [1] imaginary */

void dft(void)
{
    int u, k;
    double p;

    for (u = 0; u < N; u++) {
        Fu[u][0] = Fu[u][1] = 0.0;                          /* clear accumulators */
        for (k = 0; k < N; k++) {
            p = 2 * PI * u * k / N;
            Fu[u][0] += fx[k][0] * cos(p) + fx[k][1] * sin(p);   /* real part      */
            Fu[u][1] += fx[k][1] * cos(p) - fx[k][0] * sin(p);   /* imaginary part */
        }

        /* multiply the result by 1/N */
        Fu[u][0] /= N;
        Fu[u][1] /= N;
    }
}

Of course, this program is very inefficient because it recalculates the cos and sin values on every pass, which consumes processor clock cycles. A lookup table of precomputed values would speed it up considerably. We are also accessing the arrays through an index; a faster method is to use pointers for direct addressing: *ptr.

Fast Fourier Transform (FFT)



Whether you use the DFT algorithm or the FFT algorithm, the principle behind both is the same. The FFT simply uses a different set of calculation steps to reduce the amount of processing required.

Computer processors consume the most clock cycles during the multiplication step, so it is important to minimise the number of multiplications. With the FFT algorithm, the data has to be reordered so that intermediate products can be reused, which reduces the number of multiplication operations.
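The reordering referred to above is usually bit-reversal ordering. A sketch of how a radix-2 FFT shuffles its input before the butterfly passes (helper names are my own; bits is log2 of the transform length):

```c
/* Reverse the low 'bits' bits of x: 001 -> 100, 110 -> 011, etc. */
unsigned reverse_bits(unsigned x, unsigned bits)
{
    unsigned r = 0;
    while (bits--) {
        r = (r << 1) | (x & 1);
        x >>= 1;
    }
    return r;
}

/* Swap each sample with the one at its bit-reversed index, so the
 * in-place butterfly stages that follow touch the data in the
 * right order. */
void bit_reverse_order(double data[], unsigned n, unsigned bits)
{
    for (unsigned i = 0; i < n; i++) {
        unsigned j = reverse_bits(i, bits);
        if (j > i) {                /* visit each pair only once */
            double t = data[i];
            data[i] = data[j];
            data[j] = t;
        }
    }
}
```

For an 8-point transform the input order {0,1,2,3,4,5,6,7} becomes {0,4,2,6,1,5,3,7}, after which each butterfly stage combines neighbouring pairs.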

There is an excess of information on the Internet regarding this subject. There are plenty of ‘experts’ who can explain butterfly operations and twiddle factors; however, very few can implement them.
 


Author: Peter J. Vis