## Optical Fourier Transforms

Abstract

Fourier optics studies the propagation of visible light using modern transformation mathematics, i.e. Fourier transform theory. The theory suits the study of phase masks and of cascaded lenses, a practice widely applied in optical instruments such as telescopes. A 4f optical processor provides an exact Fourier transform and a space-invariant representation of the image. The objectives of this project are to investigate how masking light in the Fourier plane affects the pattern formed in the image plane, and to characterise the various available masks. A coherent optical processor was built from two identical lenses, each with a measured focal length of 471 mm, and used to explore the effects of spatial filters on the produced images. Pupil masks were placed in the Fourier plane to control the intensity, and images were recorded with a CCD camera. Analysis of the results shows that an optical image processor can be used to modify an image, with the transformation governed by the principles of Fourier transformation.

Introduction

A diffraction grating splits and diffracts a light wave into several beams that travel in different directions, acting as a dispersive element (Loewen & Popov, 2013). The grating spacing and the wavelength of the incoming light determine the directions of the diffracted beams, while the intensity of light through the slits depends on the direction of propagation. The diffraction pattern produced by a grating corresponds to a Fourier transform. The Fourier transform converts a function of time (or position) into a function of frequency; consequently, periodic functions can be decomposed into a sum of sine waves (Loewen & Popov, 2013).

When an aperture in front of a lens is illuminated by a spatially coherent, monochromatic plane wave, a diffraction pattern that is the Fourier transform of the aperture's transmission function is formed in the back focal plane of the lens. If a diffraction grating is used, the Fourier transform consists of the fundamental frequency and its harmonics. If a second lens is placed one focal length behind the first transform plane, an inverted image of the object is expected in the back focal plane of the second lens. Spatial filtering is based on the idea that the spatial frequency components can be modified before the object is recreated (Goodman, 2015).

CCD cameras are normally used to record the images by converting light into electrical signals. The sensor consists of a grid of pixels, in this case measuring 2048 × 2048 (Kutay & Zalevsky, 2015).

The Fourier transform is a representation of an image or a function in terms of the magnitude of each sinusoidal frequency present in the original function. It maps an input onto an output, breaking the input signal down into its frequency components (Goodman, 2015). For a two-dimensional function f(x, y), the transform takes the form

F(u, v) = ∬ f(x, y) e^(−i2π(ux + vy)) dx dy

The transform above produces a 2-D data array that evaluates the projections of f(x, y) onto each sinusoid. The constant (average) part of the image maps onto F(0, 0), and the oscillating parts of the image map onto the other frequencies. F(u, v) is a complex spectrum that may be represented by its phase and magnitude or by its real and imaginary parts.
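This decomposition into sinusoids can be checked numerically with NumPy's FFT (a quick sketch, not part of the original experiment): a signal containing a single spatial frequency should produce exactly two spectral peaks, at the positive and negative versions of that frequency.

```python
import numpy as np

# A 1-D signal containing a single spatial frequency (8 cycles across the window)
n = 256
x = np.arange(n)
signal = np.sin(2 * np.pi * 8 * x / n)

# The discrete Fourier transform concentrates the energy at +8 and -8
spectrum = np.fft.fft(signal)
magnitude = np.abs(spectrum)

peaks = np.argsort(magnitude)[-2:]   # indices of the two largest components
print(sorted(peaks.tolist()))        # -> [8, 248], i.e. bins +8 and -8 (256 - 8)
```

Everything outside those two bins is numerically zero, which is exactly the sense in which the transform "maps" each oscillating part of the input onto its own frequency.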

Fourier optics describes light-wave propagation in terms of linear systems and harmonic analysis. In harmonic analysis, Fourier transform methods are employed to analyse systems and signals in areas such as the separation of harmonic overtones, programming languages, and many other fields. Linear systems are useful in the formulation of diffraction and imaging. A converging lens can produce a 2-D Fourier transform, and can therefore be used as a transforming element (Loewen & Popov, 2013).

Methodology

Determination of the focal lengths

The focal lengths of both lenses were determined using a collimator, a CCD camera and a computer. The image below shows the setup of the instrument.

Figure 1: Arrangement of the lenses and the CCD for determining the focal length

The focal length, f, was calculated using the equation shown below.
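A relation commonly used for such a measurement is the thin-lens equation 1/f = 1/u + 1/v; whether this was the exact equation used here is an assumption, and the distances below are hypothetical. A minimal sketch under that assumption:

```python
import math

def focal_length(object_distance_mm, image_distance_mm):
    """Thin-lens relation 1/f = 1/u + 1/v (an assumed reconstruction;
    distances in mm, both hypothetical)."""
    return 1.0 / (1.0 / object_distance_mm + 1.0 / image_distance_mm)

# With collimated input the object is effectively at infinity, so f reduces
# to the lens-to-focus distance:
print(round(focal_length(math.inf, 471.0), 6))   # -> 471.0

# Hypothetical finite conjugates that give the same measured f = 471 mm:
print(round(focal_length(942.0, 942.0), 6))      # -> 471.0
```

The collimator makes the first case the relevant one: with a collimated beam, the focal length is simply the distance at which the beam comes to a focus.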

After obtaining the value of f, the equipment was set up with light passing through the grating, then through the lenses, and finally onto the CCD. The setup is shown in the figure below.

Figure 2: Equipment set-up

While carrying out the experiment, a quicker way of setting up the equipment was found. The new setup of the equipment is as shown in the diagram below.

Figure 3: Modified Equipment Setup

In the above diagram, the marked distance is the distance between the CCD camera and the lens. The setup was based on the fact that when the distance between the two lenses is twice the focal length, the light emerging from the second lens is collimated. This was checked to ensure that the light was indeed collimated.

Figure 4: The arrangement of the lens and collimation of light

The CCD was then added, and a diffraction grating was placed at a set distance from lens 1 to obtain an image on the CCD camera. The figure below illustrates the final setup that was used to obtain the results.

Figure 5: The diffraction grating and the CCD

A mask having a single slit of 0.5 mm was used. Theoretically, such a mask only allows light from the horizontal grating past point 3. Various images were then taken using the CCD camera at different setup configurations.

Results and Discussion

Illuminating an object with collimated light results in light passing through the transparent portions without diffraction occurring. Looking at the pattern in the diffraction-grating images, it can be seen that the optical information of the object is divided into frequency components at different intensities. The image below illustrates this division of the optical information into frequencies; notice the variation in intensity.

Figure 6: Image captured by CCD Camera

The pattern forms orthogonal to the grating slits. In the image, the maximum is at the middle; this represents the light that passes straight through the grating. It follows that the further a point lies from the middle, the more strongly the light has been bent. This analysis was based on visual inspection of the image, but the same analysis was also carried out with a program written in Python, which computes a histogram of the image to determine how the pixel values are distributed. The code snippet below shows the Python program.

import cv2
from matplotlib import pyplot as plt

# Read the captured CCD frame in grayscale (the filename is a placeholder;
# replace it with the name of the image to analyse, including the extension)
img = cv2.imread('ccd_image.png', cv2.IMREAD_GRAYSCALE)

# Histogram generation for the selected mask location
# (the third argument of calcHist is an optional mask; None uses the whole image)
imageHistogram = cv2.calcHist([img], [0], None, [100], [0, 100])

plt.subplot(221), plt.imshow(img, 'gray')
plt.title('Original Image')
plt.subplot(222), plt.plot(imageHistogram)
plt.xlim([0, 100])
plt.show()

The output of the above Python program is shown in the figure below.

Figure 7: Histogram generated by python

In the above figure, the histogram shows a high concentration of pixel counts in the central part of the image. The central part corresponds to the central slit, i.e. the light that passes through "undiffracted", and so this region received the greatest exposure. This agrees with the conclusion drawn earlier by visual inspection of the image.

A comparison of the images taken at the various points, i.e. at the image plane, at the Fourier plane, and with the masks in place, is shown in the table below.

Table 1: Images captured at the image plane, at the Fourier plane, and with the masks

| Images captured at the image plane | Images taken in the Fourier plane | Image plane with masks |
| --- | --- | --- |
| Coarse grating 3 (horizontal) | Coarse grating 3 (horizontal) | Coarse grating 3 (horizontal) |
| Coarse grating 1 (vertical) | Coarse grating 1 (vertical) | Coarse grating 1 (vertical) |
| Coarse grating 3 (horizontal lines) and multiple slit 5 (vertical lines) | Coarse grating 3 (lines horizontal) & multiple slit 5 (lines vertical) | Coarse grating 3 (horizontal) & multiple slit 5 (vertical) with 'large mask' in vertical position |
| Multiple slit 5 (lines horizontal) & multiple slit 3 (lines vertical) | | |

The table above shows how the different conditions affect the formation of the image. The images show various spatial frequencies at different exposure conditions (Stark, 2012). The results further show that the higher spatial frequencies lie at the periphery of the diffraction pattern, while the lower frequencies lie at its centre. In low-pass filtering, the general shape of an object can still be observed, but because the high spatial frequencies are missing, the details of the image are dulled. A low-pass filter therefore works by eliminating frequencies above a given cutoff frequency.

In high-pass filtering, the locations of an object's edges can be clearly seen. Mathematically, what the filter transmits at an edge is the difference of the sinusoidal components on either side of it, which is why the final image intensity highlights the object's edges. It can therefore be concluded that if a large number of high frequencies are allowed through while a large number of low frequencies are filtered out, the maxima and minima become sharp.
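The low- and high-pass behaviour described above can be reproduced numerically. The sketch below illustrates the principle rather than the optical setup itself: it builds a striped test image (loosely resembling a grating), masks its centred spectrum with a circular pupil, and transforms back.

```python
import numpy as np

def filter_image(image, cutoff, mode="low"):
    """Mask the centred 2-D spectrum with a circular pupil and invert."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    radius = np.hypot(y - rows / 2, x - cols / 2)
    mask = radius <= cutoff if mode == "low" else radius > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# Striped test object: 8 periods of dark/bright bands across 128 pixels
image = np.tile(np.kron([0.0, 1.0] * 8, np.ones(8)), (128, 1))

smooth = filter_image(image, cutoff=4, mode="low")   # stripes wash out, only the average survives
edges = filter_image(image, cutoff=4, mode="high")   # average removed, oscillating detail survives
```

With the cutoff below the stripe frequency, the low-pass result is nearly uniform (details dulled), while the high-pass result keeps the oscillating detail but loses the mean level, which is why edges stand out.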

Intensity in the Diffraction Pattern

The nature of the diffraction pattern depends on the nature of the diffracting mask. A lens in the setup recombines the diffracted light, generating a magnified image of the mask. Forming the image from a limited portion of the pattern enhances particular elements of the mask.

The slit width was found to be 0.5 mm, while according to the instrument's manual the light wavelength is 635 nm. Using these values, the expected intensity pattern can be modelled in Python. The code below models the diffraction intensity of the slit using Fraunhofer theory.

from pylab import *

wavelengthMeters = 635e-9              # wavelength in m (635 nm converted to m)
k = 2.0*pi/wavelengthMeters            # wavenumber, 2*pi/wavelength
slitWidthMeters = 0.5e-3               # slit width converted to metres
slitWidthMm = slitWidthMeters*1000.0   # slit width in millimetres, for the title
wavelength_nm = wavelengthMeters*1e9   # wavelength in nm, for the title
numberPoints = 500                     # number of points
diffractionIntensity = zeros(numberPoints)
thetx = linspace(-0.005, 0.005, numberPoints)   # diffraction angles in radians
for ithet in range(numberPoints):
    beta = 0.5*k*slitWidthMeters*sin(thetx[ithet])   # phase half-width (pi*a*sin(theta)/lambda)
    # Fraunhofer single-slit intensity, normalised to 1 at theta = 0
    diffractionIntensity[ithet] = 1.0 if beta == 0 else (sin(beta)/beta)**2
plot(thetx, diffractionIntensity, 'g-')
title('Diffraction Intensity\n(slit width=%4.2f mm     wavelength=%5.0f nm)' % (slitWidthMm, wavelength_nm))
ylabel('Diffraction Intensity')
xlabel('Angle (radians)')
grid()
show()

The above Python code produces the following diffraction intensity graph.

Figure 8: Diffraction Intensity Graph

The diffraction intensity graph in figure 8 above is based on the Fraunhofer diffraction theory. From the graph, it can be seen that the minima occur where the intensity falls to zero. There are several such minima on the graph; the first minimum occurs at an angle of about 0.00125 radians. This relationship is represented by the single-slit minima equation

a sin θ = mλ,  m = 1, 2, 3, …

where a is the slit width, λ is the wavelength and m is the order of the minimum.

This equation shows that increasing the wavelength increases the separation between the minima. Reducing the slit width towards zero pushes the minima far out, giving an essentially continuous light distribution; for a slit much narrower than the wavelength, no minima occur at all.
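As a check on the quoted value, the first-minimum angle can be computed directly from the minima relation using the slit width and wavelength from the experiment (a quick sketch):

```python
import math

wavelength = 635e-9   # m (635 nm, from the instrument's manual)
slit_width = 0.5e-3   # m (the 0.5 mm slit)

# Single-slit minima: slit_width * sin(theta) = m * wavelength; first minimum has m = 1
theta_1 = math.asin(1 * wavelength / slit_width)
print(f"{theta_1:.5f} rad")   # -> 0.00127 rad, close to the ~0.00125 rad read off the graph
```

The small discrepancy from the graphical reading is within the resolution of the plotted curve.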

On the other hand, the principal maximum occurs when θ is equal to zero. For a single slit, the intensity pattern, from which the positions of the maxima can be obtained, is given by the formula

I(θ) = I₀ (sin β / β)²,  β = (π a sin θ)/λ

where I₀ is the intensity at the centre of the pattern.

Fourier Analysis of the Captured Images Using a Python Program

The following Python program was written to aid the analysis of the captured images using Fourier transform theory.

# This program uses the NumPy fft package to find the Fourier transform of the images.
# Each image is read in grayscale, transformed, and displayed next to the
# magnitude of its spectrum.

from matplotlib import pyplot as plt   # imports the matplotlib plotting library
import numpy as numpyPackage           # imports the NumPy package (FFT routines)
import cv2                             # imports the OpenCV package (image I/O)

# Read the images into the program in grayscale. The filenames below are
# placeholders; to change an image, change its name including the extension.
inputImage = cv2.imread('image_plane.png', cv2.IMREAD_GRAYSCALE)
inputImage2 = cv2.imread('fourier_plane.png', cv2.IMREAD_GRAYSCALE)
inputImage3 = cv2.imread('image_plane_masks.png', cv2.IMREAD_GRAYSCALE)

# Find the Fourier transform of each image using the fft2 function from NumPy,
# then shift the zero-frequency component to the centre of the spectrum.
var1 = numpyPackage.fft.fft2(inputImage)
var11 = numpyPackage.fft.fftshift(var1)
var2 = numpyPackage.fft.fft2(inputImage2)
var22 = numpyPackage.fft.fftshift(var2)
var3 = numpyPackage.fft.fft2(inputImage3)
var33 = numpyPackage.fft.fftshift(var3)

# Convert each spectrum to a log-scaled magnitude image (the +1 avoids log(0))
varT = 20*numpyPackage.log(numpyPackage.abs(var11) + 1)
varT2 = 20*numpyPackage.log(numpyPackage.abs(var22) + 1)
varT3 = 20*numpyPackage.log(numpyPackage.abs(var33) + 1)

# Display the original images in grayscale on the top row
plt.subplot(331), plt.imshow(inputImage, cmap='gray')
plt.subplot(332), plt.imshow(inputImage2, cmap='gray')
plt.subplot(333), plt.imshow(inputImage3, cmap='gray')
plt.subplot(331), plt.title('Original Image in Gray Scale\nCaptured at image plane'), plt.xticks([]), plt.yticks([])
plt.subplot(332), plt.title('Original Image in Gray Scale\nCaptured at Fourier plane'), plt.xticks([]), plt.yticks([])
plt.subplot(333), plt.title('Original Image in Gray Scale\nCaptured at image plane'), plt.xticks([]), plt.yticks([])

# Display the transformed (Fourier-domain) images on the second row
plt.subplot(334), plt.imshow(varT, cmap='gray')
plt.subplot(335), plt.imshow(varT2, cmap='gray')
plt.subplot(336), plt.imshow(varT3, cmap='gray')
plt.subplot(334), plt.title('Transformed Image in Fourier Domain\nCaptured at image plane'), plt.xticks([]), plt.yticks([])
plt.subplot(335), plt.title('Transformed Image in Fourier Domain\nCaptured at Fourier plane'), plt.xticks([]), plt.yticks([])
plt.subplot(336), plt.title('Transformed Image in Fourier Domain\nCaptured at image plane'), plt.xticks([]), plt.yticks([])

plt.show()

The output of the program is shown below.

Figure 9: Fourier Analysis of Captured Images

The Python program above decomposes an image into its sine and cosine components; the output represents the image in the Fourier domain. The input image is in the spatial domain, and each point in the Fourier domain represents a particular spatial frequency present in the spatial-domain image. In the output images, the central region is whiter, which implies that the images contain more low-frequency content: most of the energy in the captured images varies slowly across the frame.

In figure 9, it can be seen that the image captured at the image plane contains components of all frequencies, as evidenced by the large white region extending from the centre across almost the whole spectrum. However, the whiteness diminishes with distance from the centre, implying that the magnitudes get smaller as the frequency increases. Furthermore, two dominant directions can be seen, vertical and horizontal; these result from the horizontal and vertical coarse gratings introduced while the images were taken. The same trend is seen in all the other images, and the transformed outputs show lines corresponding to the lines in the original images.
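The claim about dominant directions can be verified numerically (a sketch with a synthetic image, not the captured data): an image whose intensity varies only down the rows, like a horizontal grating, concentrates all of its spectral energy along the vertical axis of the centred spectrum.

```python
import numpy as np

# Horizontal stripes: intensity varies down the rows, constant along each row
profile = np.sin(2 * np.pi * 8 * np.arange(64) / 64)
image = np.tile(profile[:, None], (1, 64))

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))

# All spectral energy should sit on the vertical axis (centre column, index 32)
centre_col = spectrum[:, 32]
off_axis = spectrum.copy()
off_axis[:, 32] = 0.0
print(centre_col.max(), off_axis.max())   # large value vs. numerically zero
```

Rotating the stripes by 90° swaps the roles of the axes, which is exactly the horizontal/vertical pairing seen in the transformed outputs of figure 9.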

Sources of Errors

Inherent errors occur in the optical enhancement of the image, and further errors arise from the equipment setup: the accuracy with which the equipment could be aligned introduced some error. Grain noise was also present in the images obtained by the CCD camera, making it hard to see the enhancement of the image edges. In addition, the filters used during high-pass filtering have a finite refractive index, which shifts the phase of the light.

Conclusion

Optical image processors are important in image processing. For instance, in microscopy, structures of similar transparency show little contrast against each other, and enhancing the edges is not possible when the images are too smooth. Optical processors use Fourier transformation to modify an image by purely optical means.

References

Goodman, J. W., 2015. Introduction to Fourier Optics. 3rd ed. New York: Roberts and Company Publishers.

Kutay, A. & Zalevsky, Z., 2015. The Fractional Fourier Transform: With Applications in Optics and Signal Processing. 3rd ed. New York: Wiley.

Loewen, E. & Popov, E., 2013. Diffraction Gratings and Applications. 3rd ed. New York: CRC Press.

Stark, H., 2012. Application of Optical Fourier Transforms. 4th ed. New York: Elsevier.