


Question

Curve fitting is a very important concept in mathematics. In calculus, the tangent line "fits" the curve on a short interval around the point of tangency. You can use a series to develop higher-order polynomials that fit a given curve. Is this the same as Gram-Schmidt orthonormalization? Fourier series follow a similar approach: using a series of sine or cosine terms to "fit" a curve to a periodic (possibly discontinuous) function. Discuss the similarities of these methods with regard to general curve fitting.

Explanation / Answer

A fairly new and interesting topic that has arisen in the field of mathematics is the study of wavelets.  Wavelets are useful for the collection, analysis, and dissemination of information.  More specifically, they are used for such things as data compression, filtering out noise, and approximating functions.  Polynomial interpolation is an interesting topic that most mathematics students have worked with.  A specific form of polynomial interpolation is called B-spline interpolation.  In this paper we will explore B-splines and develop orthogonal B-spline wavelet families.

Now we are going to explore B-splines and some of their interesting properties.  As we stated above, B-splines are used frequently in the area of polynomial interpolation.  Interpolation is the process of fitting a curve to a known, finite number of ordered pairs, which we will call nodes.  B-splines require the nodes to be equally spaced, and they combine splines to interpolate a finite set of nodes.  A spline is a function in which several polynomials, defined on distinct, equal-sized subintervals, are joined together to create a continuous function.  We want these splines to fit together in a very specific way.  There are three important properties that must be satisfied by all B-splines, and these properties make the splines piece together as we desire.

The three important properties of B-splines are interpolation, regularity, and boundary.  Interpolation requires the splines to go through the specified nodes.  Regularity matches up the splines so that they meet at the nodes; hence, the regularity property governs the continuity of the interpolating B-spline.  Not only do we want the splines to meet at the nodes, we want a ‘smooth curve’.  That is, we want as many continuous derivatives as possible along the entire domain of the function.  The boundary property ensures that the tails of the B-splines flatten out.  To see how B-splines work to interpolate nodes we will specifically consider the quadratic B-splines.

The quadratic B-splines take any four equally spaced nodes and interpolate a piecewise quadratic curve through the four specified nodes.  Consider the four equally spaced nodes:  Below is a plot of the nodes.

From the above plot it appears that we will need three splines.  The first spline will fall between the first and second nodes, the second spline between the second and third nodes, and the third spline between the third and fourth nodes.  Now we have to think about how we can define each spline.  We know that each spline is a quadratic (because we are considering only the quadratic B-splines).  So each spline will be of the form s(t) = at^2 + bt + c, where a, b, and c are arbitrary real constants.  Since we have three splines, we will have nine unknowns.  Now we will assemble the equations we need in order to solve for our nine unknowns.

            The three quadratic splines that we must find and fit together are of the form s_i(t) = a_i t^2 + b_i t + c_i, for i = 1, 2, 3.

The interpolation property of B-splines tells us that we want our three splines to pass through specific nodes.  To make the splines pass through these nodes we will need six equations.  We want our first spline to pass through  and ; our second spline to pass through  and ; and our third spline to pass through  and .  So we get the following six interpolation equations:

Now we will consider the regularity property.  The regularity property guarantees a certain number of continuous derivatives.  In the case of the quadratic B-splines we only require that the first derivative be continuous.  So we want the first derivative of the first spline at the point  to be equal to the first derivative of the second spline at that point, and the first derivative of the second spline at the point  to be equal to the first derivative of the third spline at that point.  So we get the two following regularity equations:

We now need to make sure that the tails of the interpolating curve flatten out.  To do this we will use the boundary property.  We will have two more equations: one that sets the first derivative of the first spline equal to zero at , and one that sets the first derivative of the third spline equal to zero at .  Our last two equations are:

We now have ten equations that will make our interpolating spline satisfy the interpolation, regularity, and boundary properties.

            We currently have ten equations and nine unknowns.  We also note that all of the equations are linear in the nine unknown coefficients.  We plug all of our equations into Maple and use the solve command.  We find that our interpolating B-spline has the equations,
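The system described above can also be assembled and solved numerically. Since the specific nodes are not reproduced here, the sketch below assumes the nodes (0, 0), (1, 1/2), (2, 1/2), (3, 0), for which the ten equations turn out to be consistent; for other data the ten-equation, nine-unknown system is generally overdetermined. NumPy's least-squares solver stands in for Maple's solve command.

```python
import numpy as np

# Hypothetical nodes (the paper's specific nodes are not shown); these are
# the integer samples of the quadratic B-spline, so the 10-equation system
# is consistent.
nodes_t = [0.0, 1.0, 2.0, 3.0]
nodes_y = [0.0, 0.5, 0.5, 0.0]

# Unknowns: (a1, b1, c1, a2, b2, c2, a3, b3, c3) for s_i(t) = a t^2 + b t + c
# on the intervals [0,1], [1,2], [2,3] respectively.
A, rhs = [], []

def row(spline, t, deriv=False):
    """Coefficient row for s_spline(t) (or s_spline'(t)) in the 9 unknowns."""
    r = [0.0] * 9
    base = 3 * spline
    if deriv:
        r[base:base + 3] = [2 * t, 1.0, 0.0]
    else:
        r[base:base + 3] = [t * t, t, 1.0]
    return r

# Six interpolation equations: each spline passes through its two nodes.
for i in range(3):
    for j in (i, i + 1):
        A.append(row(i, nodes_t[j])); rhs.append(nodes_y[j])

# Two regularity equations: first derivatives match at the interior nodes.
for i in (0, 1):
    t = nodes_t[i + 1]
    A.append([x - y for x, y in zip(row(i, t, True), row(i + 1, t, True))])
    rhs.append(0.0)

# Two boundary equations: flat tails, s_1'(0) = 0 and s_3'(3) = 0.
A.append(row(0, nodes_t[0], True)); rhs.append(0.0)
A.append(row(2, nodes_t[3], True)); rhs.append(0.0)

coef, res, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)

def qbl(t):
    """Evaluate the interpolating spline QBL(t); zero outside [0, 3]."""
    if not 0.0 <= t <= 3.0:
        return 0.0
    i = min(int(t), 2)
    a, b, c = coef[3 * i:3 * i + 3]
    return a * t * t + b * t + c
```

With these assumed nodes the solver returns the pieces t^2/2, (-2t^2 + 6t - 3)/2, and (3 - t)^2/2, which is the standard quadratic B-spline.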

We will call this function QBL(t) for specific reasons that we will see shortly.  The graph of the interpolating curve QBL(t) is:

Something important to notice about the interpolating curve QBL(t) is that the specified points  are satisfied.  Also, note that the curve is smooth, so we know that we have at least one continuous derivative.  The boundary condition, which states that the tails of the curve flatten out, is also satisfied.  Since the interpolating curve goes to zero as t approaches infinity and negative infinity, we see that , as discussed in class.

            We have now seen how quadratic B-splines, and B-splines in general, work.  One of the reasons that wavelets are useful is that they can approximate data very closely.  A specific wavelet family related to B-splines is the family of quadratic Battle-Lemarie wavelets.  The scaling function of the quadratic Battle-Lemarie wavelets is the function QBL(t) seen above.  So the spline we created above can be used as the scaling function for a wavelet family.  Now we will explore the quadratic Battle-Lemarie scaling function, and we will develop the mother and some daughter wavelets.

            As we saw above,

.

We will change the QBL(t) notation to the more familiar scaling function notation, phi(t).  Another way that phi(t) can be rewritten is in terms of its dilation equation,

Now that we have the scaling function (the father wavelet) we would like to find the mother wavelet.  From the scaling function we see that the refinement coefficients are:

  and zero for all other  with .  We also know that the scaling function must satisfy the dilation equation, that is, , for all .  We can rewrite the dilation equation as , such that .  In the book Discovering Wavelets, by Steven Schlicker and Edward Aboufadel, we observe that , where .  From the refinement coefficients given above, we find that .  This information tells us that the mother wavelet is as follows,
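The dilation equation and the mother-wavelet construction can be checked numerically. The refinement coefficients themselves are not reproduced above, so this sketch assumes the standard quadratic B-spline facts: the pieces t^2/2, (-2t^2 + 6t - 3)/2, (3 - t)^2/2 for QBL(t), the coefficients (1/4, 3/4, 3/4, 1/4), and the alternating-flip rule g_k = (-1)^k c_{1-k} — one common convention — for the mother wavelet.

```python
import numpy as np

def phi(t):
    """Quadratic B-spline (QBL scaling function), supported on [0, 3]."""
    t = np.asarray(t, dtype=float)
    return np.where((0 <= t) & (t < 1), t**2 / 2,
           np.where((1 <= t) & (t < 2), (-2*t**2 + 6*t - 3) / 2,
           np.where((2 <= t) & (t <= 3), (3 - t)**2 / 2, 0.0)))

# Standard refinement coefficients of the quadratic B-spline:
# phi(t) = sum_k c_k phi(2t - k) with c = (1/4, 3/4, 3/4, 1/4).
c = np.array([0.25, 0.75, 0.75, 0.25])

# Verify the dilation equation on a grid covering the support.
t = np.linspace(-1.0, 4.0, 2001)
lhs = phi(t)
rhs = sum(ck * phi(2 * t - k) for k, ck in enumerate(c))
dilation_error = float(np.max(np.abs(lhs - rhs)))

def psi(t):
    """Candidate mother wavelet from the alternating-flip rule
    g_k = (-1)^k c_{1-k} (an assumed convention, not taken from the paper)."""
    t = np.asarray(t, dtype=float)
    return sum((-1) ** k * c[1 - k] * phi(2 * t - k) for k in range(-2, 2))
```

The dilation error comes out at machine precision, confirming that the assumed pieces and coefficients are consistent.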

To visualize the scaling function and the mother wavelet observe the two plots below,

            Using our formula , we can build a wavelet family and find as many generations of children as we would like.  However, a key property of a multiresolution analysis is that the wavelet family be mutually orthogonal.  Orthogonality within a wavelet family allows us to use the orthogonal decomposition theorem, which enables nice projections.  So we will test the father and mother wavelets for orthogonality, using the following inner product, .  If the inner product of two functions is zero, then they are orthogonal; if it is non-zero, then they are not orthogonal.  Taking the inner product of the QBL scaling function and the generated mother wavelet, we find that their inner product is 0.01354166667.  Thus the QBL scaling function and mother wavelet are not orthogonal.
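This inner-product value can be reproduced by quadrature, assuming the standard quadratic B-spline pieces for the scaling function and the alternating-flip rule g_k = (-1)^k c_{1-k} for the mother wavelet (an assumed convention; under it the computed value agrees with the one quoted above, and equals 13/960 exactly).

```python
import numpy as np

def phi(t):
    """Quadratic B-spline scaling function QBL(t), supported on [0, 3]."""
    t = np.asarray(t, dtype=float)
    return np.where((0 <= t) & (t < 1), t**2 / 2,
           np.where((1 <= t) & (t < 2), (-2*t**2 + 6*t - 3) / 2,
           np.where((2 <= t) & (t <= 3), (3 - t)**2 / 2, 0.0)))

c = np.array([0.25, 0.75, 0.75, 0.25])  # quadratic B-spline refinement coefficients

def psi(t):
    """Candidate mother wavelet, g_k = (-1)^k c_{1-k} (assumed convention)."""
    t = np.asarray(t, dtype=float)
    return sum((-1) ** k * c[1 - k] * phi(2 * t - k) for k in range(-2, 2))

# Composite-trapezoid approximation of <phi, psi> over the common support.
t = np.linspace(-1.0, 3.0, 400001)
y = phi(t) * psi(t)
ip = float(np.sum(y[1:] + y[:-1]) * (t[1] - t[0]) / 2)
# ip comes out near 13/960 = 0.0135416..., so the pair is not orthogonal.
```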

            In order to have a multiresolution analysis and use the orthogonal decomposition theorem we need orthogonality.  So we will make the QBL wavelet family into an orthogonal wavelet family.  The development of an orthogonal wavelet family can be done in many different ways.  We decided to construct the orthogonal wavelet family using the Gram-Schmidt process and using Fourier transforms.  First we will study the Gram-Schmidt process.

            It is important to notice that the Gram-Schmidt process takes a non-orthogonal basis and makes it orthogonal.  The Gram-Schmidt process tells us that given a basis , we define

Then  is an orthogonal basis for .  To use the Gram-Schmidt process we first used a series of Maple programs to construct the first n generations of the non-orthogonal QBL wavelet family.  Then we developed a Maple program that used the Gram-Schmidt process to orthogonalize those first n generations.  Graphically, the orthogonal wavelet family differs very little from the non-orthogonal wavelet family.  Under the Gram-Schmidt process the scaling function is unchanged.  The first wavelet that must be changed is the mother wavelet.  Consider the following plots of the non-orthogonal mother wavelet compared to the orthogonal mother wavelet.
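The Maple programs themselves are not reproduced here, but the Gram-Schmidt step can be sketched generically on discretized functions. The monomials 1, t, t^2 below are a hypothetical stand-in for the wavelet family; as noted above, the first input (the scaling function's role) is left unchanged.

```python
import numpy as np

def gram_schmidt(vectors, inner):
    """Classical Gram-Schmidt: returns an orthogonal family spanning the
    same space, leaving the first input unchanged."""
    ortho = []
    for v in vectors:
        w = v.copy()
        for u in ortho:
            w = w - (inner(v, u) / inner(u, u)) * u
        ortho.append(w)
    return ortho

# Discretized L^2 inner product on [0, 1] via the composite trapezoid rule.
t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]

def inner(f, g):
    y = f * g
    return float(np.sum(y[1:] + y[:-1]) * dt / 2)

family = [np.ones_like(t), t.copy(), t**2]   # hypothetical demo functions
ortho = gram_schmidt(family, inner)

# Largest off-diagonal inner product: should vanish after orthogonalization.
off_diag = max(abs(inner(ortho[i], ortho[j]))
               for i in range(3) for j in range(i))
```

For these inputs the process reproduces the shifted-Legendre directions 1, t - 1/2, and t^2 - t + 1/6.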

We see that they are almost the same.  Using , we can calculate the distance between the orthogonal and the non-orthogonal mother wavelets.  This calculation gives us an idea of how much error is involved in the orthogonalization process.  Using Maple we find that , where MQBL stands for the non-orthogonal mother wavelet and OMQBL stands for the orthogonal mother wavelet.  Since the distance between the two is reasonably small, it seems that our new orthogonal wavelet family will give reasonable approximations.  To see how the rest of the wavelet family looks, the following is a graph of the second generation.  These are the non-orthogonal daughters (the granddaughters), graphed together with the orthogonal granddaughters.

As you can see from the graphs above it appears as though the non-orthogonal granddaughters have been orthogonalized with little error.

Now that we have orthogonalized the quadratic Battle-Lemarie wavelets via the Gram-Schmidt process, we will see how well they can be used to approximate  functions.  We will consider two functions:  and .  We will project each function onto the first three generations of the orthogonalized quadratic Battle-Lemarie wavelets.  The following is a plot of  projected onto the orthogonal QBL wavelet families.

This projection seems to be a reasonable approximation of .  This approximation is good because B-splines are bell-shaped functions; therefore, B-splines will more accurately approximate functions that have regions which appear bell shaped.  The error involved in the projection is the norm of the residual.  Using Maple we calculate the error on the compact support, [0,3], to be 0.4025.  Since B-splines approximate bell-shaped functions more accurately, the following is an example of a function that will be poorly approximated.  The following is a plot of  projected onto the orthogonal QBL wavelet families.
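The error computation — the norm of the residual after an orthogonal projection — can be sketched as follows. The orthogonalized wavelet family and the test functions are not reproduced here, so a hypothetical bell-shaped target and a small QR-orthonormalized set of spline translates stand in.

```python
import numpy as np

def phi(t):
    """Quadratic B-spline on [0, 3]."""
    t = np.asarray(t, dtype=float)
    return np.where((0 <= t) & (t < 1), t**2 / 2,
           np.where((1 <= t) & (t < 2), (-2*t**2 + 6*t - 3) / 2,
           np.where((2 <= t) & (t <= 3), (3 - t)**2 / 2, 0.0)))

t = np.linspace(0.0, 3.0, 30001)
dt = t[1] - t[0]

def inner(f, g):
    """Discretized L^2 inner product on [0, 3] (trapezoid rule)."""
    y = f * g
    return float(np.sum(y[1:] + y[:-1]) * dt / 2)

# Discretize a few integer translates of phi and orthonormalize them;
# QR on the sample matrix plays the role of Gram-Schmidt here.
raw = np.column_stack([phi(t - k) for k in (-1, 0, 1)])
q, _ = np.linalg.qr(raw)
basis = [q[:, j] / np.sqrt(inner(q[:, j], q[:, j])) for j in range(q.shape[1])]

# Hypothetical bell-shaped target (the paper's test functions are not shown).
f = np.exp(-((t - 1.5) ** 2))

proj = sum(inner(f, b) * b for b in basis)           # orthogonal projection
residual_error = np.sqrt(inner(f - proj, f - proj))  # error = ||residual||
```

By the orthogonal decomposition theorem the residual is (numerically) orthogonal to every basis function, and its norm is the projection error discussed above.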

This projection is, as expected, a poor approximation of .  The error involved in the projection is the norm of the residual, which turns out to be 5.432.

Now that we have orthogonalized the quadratic Battle-Lemarie wavelets via the Gram-Schmidt process we will do the same thing using Fourier transforms and then compare the results.

            Now that we have established one way to find an orthogonal wavelet basis for B-splines, we will direct our attention to an analytic process called the Fourier transform.  As one might recall, Fourier series are useful when working with periodic functions.  Fourier transforms are even more useful in that they may be used with non-periodic functions.  There are many types of Fourier transforms, each of which serves a specific purpose.  Because we are dealing with pieces of continuous functions, we will work with the continuous Fourier transform.  The basic idea of this process is a change of domain.  We begin with a function that exists in a time domain.  Then we take the Fourier transform of the function, converting it to a function in a frequency domain.  Many properties of the function in the time domain have special parallels in the frequency domain, and often these parallels drastically reduce the difficulty of working with such properties.  Some important ideas include linearity, time shifting, frequency shifting, scaling, differentiation, time convolution, and frequency convolution.  These ideas will not be used here, although they are important in the area of Fourier transforms and wavelet analysis.

            Consider the function  continuous on the real numbers.  The Fourier transform of this function is as follows:

.

This function, which originally depended on t (time), has been transformed into a function that depends on w (frequency).  Observe that i is the imaginary unit, the square root of -1, from complex analysis.  Thus we will treat i as a constant value.  Now is a good time to proceed with this introduction by considering an example.

            Example:  Consider the Quadratic Battle-Lemarie scaling function.

As we showed before this function satisfies the dilation equation

                                                        (1)

The Fourier transform of this function is calculated as follows (using Maple for the integrals).

Now we are working with the quadratic Battle-Lemarie scaling function in the frequency domain.  Notice that this function is complex valued.  We will discuss this shortly.  We want this function to be orthogonal to its translates.  The basic process of creating a scaling function that is orthogonal to its translates is as follows.
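As a cross-check on the transform computation: the quadratic B-spline is the three-fold convolution of the indicator function of [0, 1], so its transform has the closed form ((1 - e^{-iw})/(iw))^3 under the unnormalized convention F(w) = integral f(t) e^{-iwt} dt (Daubechies' 1/sqrt(2*pi) factor is omitted here; this is an assumed convention). Comparing the closed form with direct quadrature:

```python
import numpy as np

def phi(t):
    """Quadratic B-spline QBL(t) on [0, 3]."""
    t = np.asarray(t, dtype=float)
    return np.where((0 <= t) & (t < 1), t**2 / 2,
           np.where((1 <= t) & (t < 2), (-2*t**2 + 6*t - 3) / 2,
           np.where((2 <= t) & (t <= 3), (3 - t)**2 / 2, 0.0)))

def ft_numeric(w, n=200001):
    """F(w) = integral phi(t) e^{-iwt} dt by composite-trapezoid quadrature."""
    t = np.linspace(0.0, 3.0, n)
    y = phi(t) * np.exp(-1j * w * t)
    return complex(np.sum(y[1:] + y[:-1]) * (t[1] - t[0]) / 2)

def ft_closed(w):
    """Closed form: phi = chi * chi * chi, so F(w) = ((1 - e^{-iw})/(iw))^3."""
    return ((1 - np.exp(-1j * w)) / (1j * w)) ** 3

w = 1.7  # arbitrary nonzero test frequency
err = abs(ft_numeric(w) - ft_closed(w))
```

The two agree to quadrature accuracy, and F(0) = 1 since the spline integrates to one.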

.

First we must choose the function so that both it and its Fourier transform have reasonable decay.  The function must also satisfy the criteria for a multiresolution analysis.  That is,  and  must satisfy the following properties:

Property 1:   and  have reasonable decay.

Property 2:  ,  where .

Property 3:  .

Property 4:  .

Property 5:   must be orthogonal to its translates.

If  and  satisfy the above five properties, we have a multiresolution analysis.  Below is a definition that is given in Dr. Schlicker and Dr. Aboufadel's book, Discovering Wavelets.

            Definition:  A multiresolution analysis is a nested sequence

of subspaces of  with a scaling function  such that

            1.   is dense in ,

            2.  ,

            3.   if and only if , and

            4.   is an orthonormal basis for .

  From equation (1) above we see that property 1 is satisfied; note that .  Now we must show that property 2 is satisfied.  In Daubechies' book we find that .  So we see that .  Using Maple we find that .  Hence we see that Daubechies' criteria are satisfied.

In order to make the QBL scaling function nicer, we will shift the function to the left one unit.  This shift may make the function more nearly real-valued.  Another interesting thing that Daubechies does in her book is to multiply the Fourier transform of the scaling function by a factor of .  This is actually a rotation of the Fourier transform by an angle of w in the complex plane.  It is not absolutely clear to us why this works.  However, it appears that she uses this rotation to make the Fourier transform of the scaling function more nearly real-valued.  We found that other functions may be rotated by angles of some multiple of w, making them more, if not completely, real-valued.  We believe she performs this operation for ease of integration, which will prove useful later.  The background of most undergraduates does not cover intensive numerical analysis, so because of this minor ambiguity we will leave this area as a topic of further research for the reader.  After merely shifting the function we have

Notice that this function now satisfies the dilation equation

Using Maple we found that the Fourier transform of this new shifted function is

                                                   .                                (2)

In Daubechies’ Ten Lectures on Wavelets we see that she uses the transform

                                                 .                                 (3)

It is important to show that the real components of functions (2) and (3) are equivalent.  We are only interested in the equivalence of the real parts of these functions because the imaginary component makes up a very small portion of each function; therefore there may be some error between the imaginary components of (2) and (3).  Thus, we will plot the real-valued components of (2) and (3) simultaneously to see if they are equivalent.

From the above plot we conclude that the real-valued components of (2) and (3) appear to be equivalent.  Now we will plot the imaginary components of equations (2) and (3) so that we may observe the level of error we incur by requiring only the equivalence of the real-valued components of (2) and (3).

Since the difference between the two functions is very small we will, for all practical purposes, consider the difference to be zero.  (Note that this is a lossy process.)  It is the case that  is not orthogonal to all of its translates.  This leads us to the idea of Daubechies’ orthonormalization trick.

We must make the translates of  orthogonal.  Daubechies’ orthonormalization trick is defined as:

The # sign refers to the function  after it has been orthonormalized.  This process takes a function's transform, which is continuous, to a linear combination of the transform.  Then, since it is made up of continuous functions,  is also a continuous function.  The process also normalizes .  The function  is the transform of our new scaling function that is orthogonal to its translates.  Combining our results from (3) with , we see that
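The orthonormalization trick divides the transform by the square root of the 2*pi-periodic sum of squared translates in frequency, F#(w) = F(w) / sqrt(sum_k |F(w + 2*pi*k)|^2). A numerical sketch, assuming the standard quadratic B-spline transform in the unnormalized convention: the denominator can also be computed from the integer-shift inner products of the spline (11/20, 13/60, and 1/120 for shifts 0, 1, 2), which give the closed form (33 + 26 cos w + cos 2w)/60.

```python
import numpy as np

def ft(w):
    """Unnormalized Fourier transform of the quadratic B-spline."""
    return ((1 - np.exp(-1j * w)) / (1j * w)) ** 3

def denom(w, K=500):
    """Truncated periodization D(w) = sum_k |ft(w + 2*pi*k)|^2."""
    return sum(abs(ft(w + 2 * np.pi * k)) ** 2 for k in range(-K, K + 1))

def ft_orth(w):
    """Daubechies' orthonormalization trick: ft#(w) = ft(w) / sqrt(D(w))."""
    return ft(w) / np.sqrt(denom(w))

w = 1.3  # arbitrary test frequency, avoiding multiples of 2*pi
# Closed form of D(w) from the spline's integer-shift inner products:
closed = (33 + 26 * np.cos(w) + np.cos(2 * w)) / 60
trick_check = denom(w)
```

Since D(w) is 2*pi-periodic, the translates of the new scaling function are orthonormal in frequency: the periodization of |ft#|^2 is identically 1.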

Therefore we see that

                                                                           (4)

To visualize this function, the plots of the real-valued and imaginary components of  are as follows.

Notice that the imaginary component of  has a significant value compared to the real-valued component.  Earlier we conjectured that Fourier transforms can be rotated by angles of some multiple of w to make them more, if not completely, real-valued.  Now would be a good time to explore this idea.  However, because the mathematics involved is beyond the scope of this paper, we will leave this process of minimizing error to the reader.

Thus we must use the inverse Fourier transform in order to find the new equation.  The inverse Fourier transform is

.

However, taking the inverse Fourier transform of  may not always be a reasonable task, even with the use of computers.  Therefore we will numerically approximate  using a Maple procedure that applies successive Simpson's-rule approximations to evaluate the integral.  We also restrict the integration to a smaller finite window.  This is important to note, since such an approximation will produce some error in the value of the integral.  It is also interesting to notice that the integrand oscillates about the axis, so the small contributions tend to cancel each other out; this suggests the resulting error may be somewhat small.  The following is a plot of , using the process described above.
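The numerical inversion step can be sketched in the same spirit, with a composite trapezoid rule over a truncated window standing in for the Maple Simpson's procedure. As a sanity check we invert the spline's own transform (rather than the orthonormalized one), since there the exact values are known.

```python
import numpy as np

def ft(w):
    """Transform of the quadratic B-spline, written sinc-style to avoid
    the removable singularity at w = 0."""
    w = np.asarray(w, dtype=float)
    return np.exp(-1.5j * w) * np.sinc(w / (2 * np.pi)) ** 3

def inverse_ft(t, W=200.0, n=400001):
    """Approximate phi(t) = (1/(2*pi)) * integral ft(w) e^{iwt} dw over the
    truncated window [-W, W], by composite-trapezoid quadrature."""
    w = np.linspace(-W, W, n)
    y = ft(w) * np.exp(1j * w * t)
    integral = np.sum(y[1:] + y[:-1]) * (w[1] - w[0]) / 2
    return float(integral.real) / (2 * np.pi)

recovered = inverse_ft(1.5)  # the spline's peak value, phi(1.5) = 3/4
```

The w^-3 decay of the transform keeps the window-truncation error small, which is exactly the cancellation effect noted above.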

One thing to notice is that the values of  close to zero appear quite different from what we expected.  Using the picture of the graph in Daubechies' book, we approximated the values that she found in these areas.  We assume that she must have calculated the limit of the sequence of function values as they approached the specific points in question to a much higher level of accuracy than we did.  A graph of the approximated  is as follows.

Note that  should be orthogonal to all of its translates.  We say, “should be”, because of the way we approximated the function.  Because of the procedure used to approximate , we are not able to verify that this property is satisfied.

            It is clear from this paper that the study of wavelets extends far beyond the basic ideas of signal processing.  There are seemingly endless numerical analysis applications, as well as many real-world applications, for wavelets.  Wavelets have been used in data compression, noise filtering, seismology, and more; the list is very large and rapidly growing.  B-splines are very important in that they interpolate data points with continuous functions.  These functions can be a very good starting point for creating orthogonal wavelet families; specifically, they create the orthogonal B-spline wavelet families.  These are just one specific type of spline, so there may be other splines that rival the B-splines in data-processing capabilities.  When starting with a scaling function, as Daubechies did in her book, one must make sure that the scaling function and its transform satisfy the properties defined for a multiresolution analysis.  Then one can use either the Gram-Schmidt process or the continuous Fourier transform to create an orthogonal B-spline wavelet family.
