Expansion of a function into a basis set

Consider a ``neat'' function, the sort an engineer is most likely to use in his typical theory/work. Such a function would typically be: (i) defined over a single finite interval, (ii) continuous throughout, and (iii) smooth enough. In other words, the sort of function they used when they introduced the very idea of the graph of a function to you, back in high school. ... Feel free to add any other qualifications you like, but note them explicitly, e.g., (iv) bounded throughout, and (v) periodic. [I don't mind additional qualifications; basically, as of now, reaching even just the low-hanging fruit would be gratifying.]

Suppose we want to represent this function, perhaps only in a limiting sense, as a sum of some linearly independent functions. (The basis set itself may be infinite.)

From whatever I know of numerical analysis and computational science and engineering, speaking off-hand, I can think of (only) two methods:

Expansion into
(A) a polynomial basis set
(B) the Fourier-theoretical basis set

These respectively cover the cases of $latex \sum a_n x^n$ and $latex \sum a_n e^{inx}$.
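To make (A) and (B) concrete, here is a quick numerical sketch (the test function, degree, and all names are mine, purely for illustration): it approximates a smooth periodic function on $latex [-\pi, \pi]$ both ways, by a least-squares polynomial of degree $latex N$ and by a truncated Fourier sum with $latex |n| \le N$.

```python
import numpy as np

# Test function: smooth, bounded, and 2*pi-periodic (an assumption made
# here so that the Fourier side works cleanly).
f = lambda t: np.exp(np.cos(t))

N = 10
# Periodic grid: omit the right endpoint so that the rectangle rule
# below is spectrally accurate for periodic integrands.
x = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
fx = f(x)

# (A) Polynomial basis: least-squares fit with the powers x^0 .. x^N.
pc = np.polynomial.polynomial.polyfit(x, fx, N)
poly_approx = np.polynomial.polynomial.polyval(x, pc)

# (B) Fourier basis: a_n = (1/2pi) * integral of f(x) e^{-inx} dx,
# approximated by the rectangle rule on the periodic grid.
dx = x[1] - x[0]
def a(n):
    return np.sum(fx * np.exp(-1j * n * x)) * dx / (2 * np.pi)

fourier_approx = sum(a(n) * np.exp(1j * n * x) for n in range(-N, N + 1)).real

err_poly = np.max(np.abs(poly_approx - fx))
err_fourier = np.max(np.abs(fourier_approx - fx))
print("max polynomial error:", err_poly)
print("max Fourier error   :", err_fourier)
```

Both errors come out small for such a neat function; which basis converges faster depends on the function at hand.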

I now have two questions:

Q1. Why don't we use (C) $latex \sum a_n e^{nx}$ in our routine numerical analysis/computational modeling work (e.g. FEM)? Can you think of some application/reference where someone has used it?

Q2. Theoretically, a much more interesting (and demanding) question:

Do these three categories exhaust all the possibilities? Can we logically say so? Is there a mathematical theorem that allows us to say so?
If the answer is no: What is the obvious objection if I were to formulate this conjecture as a lemma?
If the answer is yes: Can such a theorem be proved?

What are your thoughts on Q1 and Q2? Care to share? Thanks in advance.

As to me, here I swiftly go:

Q1: Never seen one in a real application; don't know the reason; can't even guess why.

Q2: As one who has always been mathematically much more starry- and dewy-eyed than rigorous, compared to the average engineer, I am inclined to jump immediately to the second conclusion, viz. ``yes.'' The ``reason'' that appeals to me here is the theorem proved by Gauss in 1799, viz., that the complex numbers form an algebraically closed field. ... But, rigorously speaking, is this theorem even relevant here? To me, it does seem so; but rigorously speaking, I have no clue.

Anyway, over to you; am listening.




0. Allow me to note a clarification (i.e. a note on how stupid I can so easily get). The clarification is about Q1.

1. Just note the Taylor series expansion of $latex e^x$. It is $latex \sum \dfrac{x^n}{n!}$. Ergo, $latex \sum a_n e^{nx}$ is already covered by category (A), i.e. $latex \sum a_n x^n$. Thus, Q1 is redundant. [Famous Last Words: I should have known...]
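A quick numerical sanity check of this point (a throwaway sketch; the helper name is mine): the power series $latex \sum_k (nx)^k/k!$ reproduces $latex e^{nx}$, so any finite combination $latex \sum a_n e^{nx}$ is itself a power series in $latex x$.

```python
import math

# Partial sum of the power series for e^z: sum_{k < terms} z^k / k!
def exp_series(z, terms=40):
    return sum(z**k / math.factorial(k) for k in range(terms))

# e^{nx} matches its power series at several sample points, so
# category (C), sum a_n e^{nx}, collapses into category (A), sum a_n x^n.
for n in (1, 2, 3):
    for xval in (-1.0, 0.3, 1.5):
        exact = math.exp(n * xval)
        assert abs(exp_series(n * xval) - exact) < 1e-9 * max(1.0, exact)
print("e^{nx} agrees with its power series at all test points")
```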

2. How could I make such a stupid mistake? Here is the excuse...

Actually, while writing the post, I was simultaneously thinking of something else, beyond what I have explicitly noted above.

3. The core issue here is the polynomial expansion $\sum a_n x^n$ vs. the Fourier expansion $\sum a_n e^{inx}$. These two make up entirely different categories.

It is well known that a function whose expansion is ``naturally'' expressed in one of them tends to be troublesome to express in the other. For example, electrical engineers (or anyone dealing with periodic phenomena, like the signal-processing folks) are often advised not to try a polynomial expansion, whereas mechanical/civil/aero engineers doing static analyses (but not vibration problems) typically pick only polynomial expansions (which is the reason why spectral methods arrived on the usual CFD/FEM scene so late).
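The mismatch is easy to see numerically. In this sketch (all details mine, for illustration), a degree-9 polynomial is fitted to one period of a sine wave; it does fine inside the fitted interval but blows up just one period later, whereas the trigonometric representation is, of course, exact everywhere.

```python
import numpy as np

# Fit a degree-9 polynomial to one period of sin(x) on [0, 2*pi].
x_fit = np.linspace(0.0, 2 * np.pi, 200)
c = np.polynomial.polynomial.polyfit(x_fit, np.sin(x_fit), 9)

# Evaluate inside the fit interval, and one full period beyond it.
err_in = np.max(np.abs(np.polynomial.polynomial.polyval(x_fit, c)
                       - np.sin(x_fit)))
x_out = np.linspace(2 * np.pi, 4 * np.pi, 200)
err_out = np.max(np.abs(np.polynomial.polynomial.polyval(x_out, c)
                        - np.sin(x_out)))

print("max error inside [0, 2pi]:", err_in)   # small
print("max error on [2pi, 4pi]  :", err_out)  # huge: a polynomial cannot stay periodic
```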

So, I thought up this issue of the mathematical completeness or otherwise of these two categories (as far as serving as the means of expanding an arbitrary but neat function goes).

4. I then wanted to make sure I was covering all possible combinations of the reals and the imaginaries, in both bases and exponents.

In particular, (only) at the back of my mind, I was considering the possibility of $x^{i}$, which isn't explicitly mentioned in the post above. Yet, while writing my post on the fly, one part of my mind remained engaged in ``figuring'' out whether it would be covered by $e^{inx}$ or not. I ``felt'' it was covered, and so, proceeded writing further anyway. (I had not worked out anything in detail on paper.)

My ``feeling'' turns out to be right. But this diversion of attention meant that I ended up introducing the silly mistake of thinking up Q1. I realized its redundancy only after publishing the post.

5. Incidentally, if it isn't already clear why $latex x^i$ gets covered by $latex e^{inx}$, see this Math[sic] StackExchange thread here [^].
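For what it's worth, the identity behind that thread can be checked in a couple of lines (the sample points are arbitrary): for positive $latex x$, $latex x^i = e^{i \ln x}$, so the real-base/imaginary-exponent combination already lives inside the complex-exponential family.

```python
import cmath
import math

# x^i equals e^{i ln x} = cos(ln x) + i sin(ln x) for x > 0
# (principal branch), so x^i is covered by the complex-exponential family.
for xval in (0.5, 1.0, math.e, 10.0):
    lhs = complex(xval) ** 1j
    rhs = cmath.exp(1j * math.log(xval))
    assert abs(lhs - rhs) < 1e-12
print("x^i == e^{i ln x} at all test points")
```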


6. OK. So, let's get back on track. Now, only Q2 survives. What do you think of it?


Sorry for this mess, and best,




If I understand correctly, your problem is about the expansion of a given function on a set of other "elementary" functions. If so, then there is a well-established mathematical framework for this: Fourier series (or orthogonal expansions) in Hilbert spaces. These spaces are vector spaces endowed with a scalar product. They are similar to the familiar Euclidean space, but it was Hilbert who discovered that the same geometry remains valid in the infinite-dimensional case. The concept of a basis then remains the same, if one replaces a finite sum with the sum of a series.

The basis functions need not be the sine and cosine functions, although these were at the origin of the concept (the history of the subject is fascinating: Euler, D'Alembert, Bernoulli and Lagrange were already discussing "Fourier" series in connection with the solution of the problem of the vibrating string). Depending on the nature of the problem, one may have polynomials (Legendre, Chebyshev, Hermite, Laguerre), Bessel functions, spherical harmonics, step (or Walsh) functions, wavelets, etc.: once you show that such a system of functions is dense within the whole space, you can extract a basis, orthogonalize it, and then nicely represent any element/function of that space via the Fourier formula for the expansion coefficients.

The subject is vast, with numerous applications: in the solution of boundary value problems, signal processing, computer graphics, interpolation, etc. Useful references:

Courant and Hilbert: Methods of mathematical physics, vol I.

Sagan: Boundary and eigenvalue problems in mathematical physics.

Hochstadt: The functions of mathematical physics.

Strang: Introduction to applied mathematics.
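The recipe described above — pick a dense orthogonal system and compute the expansion coefficients by inner products — can be sketched in a few lines (the function $e^x$, the truncation level, and the quadrature order are my own arbitrary choices), here with the Legendre polynomials on [-1, 1]:

```python
import numpy as np
from numpy.polynomial import legendre

# Expand f(x) = e^x on [-1, 1] in Legendre polynomials P_n.
# The "Fourier formula" for the coefficients is
#   c_n = (2n + 1)/2 * <f, P_n>,  <f, g> = integral of f*g over [-1, 1],
# with the integral computed by Gauss-Legendre quadrature.
f = np.exp
N = 8
nodes, weights = legendre.leggauss(50)

coeffs = [(2 * n + 1) / 2
          * np.sum(weights * f(nodes) * legendre.Legendre.basis(n)(nodes))
          for n in range(N + 1)]

approx = legendre.Legendre(coeffs)
x = np.linspace(-1.0, 1.0, 500)
err = np.max(np.abs(approx(x) - f(x)))
print("max error of the 9-term Legendre expansion of e^x:", err)
```

Swapping Legendre for Chebyshev, Hermite, etc. only changes the weight in the inner product; the expansion formula stays the same.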



Dear Stefan,

Thanks for your nice (and easy to read) and very informative reply. Here are my comments, in no particular order. Feel free to correct me and/or add to my knowledge:

1. About the Bessel functions and the spherical harmonics, yes, I should have thought of them.

Can these be seen as formed by combining the polynomial and the complex-exponential forms? I think so. Please correct me if there is an important consideration that I miss.

2. About ideas such as window functions, short-time Fourier transforms, Walsh functions, wavelets, etc.

This class of methods had simply escaped me!!

...Well, that way, I am a bit familiar with these ideas. I had implemented a short-time Fourier transform for my software product ToneBrush (the product provided visualization of music). Even then, this idea had completely escaped me in this context. So, thanks for including it in your list.

There is also another line of questioning that I am currently pursuing, concerning the extent of the support of a solution, particularly for the diffusion equation. This line of thought includes threads such as the bump function, analyticity, and the expansion of support at the beginning of the solution procedure; sub-domain collocation (in approximate analysis); etc. Now, windowing is a superbly relevant concept here, and though I had mentioned wavelets just in passing in my earlier paper on diffusion, I had somehow forgotten about it by now. Thanks for highlighting its relevance again. (Of course, for that matter, I need to learn wavelets and windowing better anyway!)

(That's another reason why discussions like these have a very definite value. It might have taken me months, even years, to look at the windowing idea from this viewpoint, simply because it had fallen off my radar over the past five-odd years.)

3. Another interesting point you mention is showing ``that such a system of functions is dense within the whole space.'' I would like to pursue the tools and techniques used to show such things. If you can think of any simplified notes on the topic, please drop a line.

4. However, let's now note a bit about the nature of the statement you make: starting with some specific functions defined in some particular way, if they are found to form a dense system, then one can extract a basis for expanding any arbitrary function defined in that space. So, the flow is: from specific functions to a basis set for expansion in that space. However, I also had another concern. Apart from listing two main categories of such functions (a list which needs to be expanded now), it was also about completeness; see the next point.

5. The 4 combinations of reals and imaginaries (and, thanks to Stefan's reply, also the higher-level functions/series obtained by combining them) seem to exhaust all the possibilities of constructing such functions that can at all yield a basis set. The higher-level or more abstract products, such as the Bessel functions, are included in the combinations, and so are those obtained using the windowing idea.

So, all such categories of functions seem to come from only those 4 combinations of {base}^{exponent}, where each of {base} and {exponent} is either a real or an imaginary. This fact seems to rely crucially on Gauss's theorem concerning algebraic closure. The question is: do these 4 combinations really exhaust the ingredients for building any such functions? Are the 4 combinations all there is to it? ... If Gauss's theorem is crucially relevant here, then the completeness is proved. If it is relevant, but not crucially relevant, then not. So, which way does it go? ... I have no real idea, even though I think the answer is yes.

6. Just one more point before we close. Granted the general theory, there still seems to be something to this distinction of polynomials vs. complex exponentials. For instance, given an initial concentration profile in the form of a bump function, if you try its transient diffusion, the polynomial approach would fail, and the Fourier analysis would end up giving you an infinite speed of signal propagation. However, for a stress analysis problem, say with the linear strain triangle, using (co)sinusoidals seems not so well-suited an idea. ... Anyway, we can keep this point aside for the moment; it's not important; the provability of the completeness of the 4 combinations is.

7. To all: I do want to make a separate note of how exceptionally helpful/fruitful/productive Stefan's reply here has been for me.
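The diffusion point above — infinite signal speed under Fourier analysis — can be demonstrated directly with the sine (eigenfunction) series for the heat equation. A rough sketch, with the grid, bump location, probe point, and time chosen arbitrarily by me:

```python
import numpy as np

# u_t = u_xx on [0, pi] with u = 0 at both ends. The eigenfunction
# (Fourier sine) series gives u(x, t) = sum_n b_n sin(n x) exp(-n^2 t).
x = np.linspace(0.0, np.pi, 1001)
dx = x[1] - x[0]

# Initial profile: a "bump" supported only on (1.0, 1.5).
u0 = np.where((x > 1.0) & (x < 1.5), 1.0, 0.0)

# Sine coefficients b_n = (2/pi) * integral of u0(x) sin(n x) dx,
# approximated by a simple rectangle-rule sum.
N = 200
b = [2.0 / np.pi * np.sum(u0 * np.sin(n * x)) * dx for n in range(1, N + 1)]

# After a short time the solution is already nonzero far outside the
# initial support: diffusion propagates signals infinitely fast.
t = 0.05
u = sum(b[n - 1] * np.sin(n * x) * np.exp(-n**2 * t) for n in range(1, N + 1))

probe = np.argmin(np.abs(x - 2.5))          # well outside (1.0, 1.5)
print("u0 at x = 2.5:", u0[probe])          # exactly zero initially
print("u  at x = 2.5, t = 0.05:", u[probe]) # already nonzero
```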

[... OK, in the meanwhile, allow me to pursue the references given, from the viewpoints of current interest.]



The completeness proofs I know of are based either on the Weierstrass theorem (polynomials are dense, with respect to uniform convergence, in the space of continuous functions defined over a compact interval), or on the Hilbert-Schmidt theory of self-adjoint operators. The first approach is natural for polynomials and related functions (trigonometric polynomials, spherical harmonics), while the latter is most often associated with "special" functions. That is, most often the basis functions are "special", meaning that they are eigenfunctions of a certain differential operator (solutions of a differential equation). And just as in the finite-dimensional case (where each symmetric operator can be diagonalized, the set of its eigenvectors being a basis for the whole space), the set of eigenfunctions of a self-adjoint operator (this time operating on a Hilbert space of functions) is also a basis (modulo some technical details, this is the essence of the Hilbert-Schmidt theorem). Many special functions have their origins in the search for solutions of certain partial differential equations, and thus material on this topic can be found in the appropriate textbooks. A very readable one is, for example:

Guenther and Lee: Partial differential equations of mathematical physics and integral equations.
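The Weierstrass route mentioned above even has a completely constructive proof, via Bernstein polynomials, which is easy to try out (the test function and degrees below are my own picks): for a continuous f on [0, 1], the polynomials B_n(f)(x) = sum_k f(k/n) C(n, k) x^k (1 - x)^(n - k) converge to f uniformly.

```python
import math

# Bernstein polynomial of degree n for f on [0, 1]:
#   B_n(f)(x) = sum_{k=0}^{n} f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
def bernstein(f, n, x):
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# A continuous but non-smooth test function: f(x) = |x - 1/2|.
f = lambda x: abs(x - 0.5)
grid = [i / 200 for i in range(201)]

for n in (10, 100, 1000):
    err = max(abs(bernstein(f, n, x) - f(x)) for x in grid)
    print(f"degree {n:4d}: max |B_n(f) - f| = {err:.4f}")  # shrinks as n grows
```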




Hi Stefan,

Thanks for your helpful reply again. Since I got caught up in a couple of job interviews and was travelling, I could not come back to you immediately. In fact, as the latest post I rapidly wrote at my personal blog indicates, that's how it's been going, and it will continue that way for some time, say until mid-July. So, please excuse me if I am not always timely in replying (esp. if the reply involves reading a book or thinking a lot).

In the meanwhile, let me rapidly note just one point, even though I am not familiar with the Hilbert-Schmidt theory. If it's a theory that crucially depends on self-adjoint operators, then, no, I don't think we are considering the problem of function expansion in full generality. That's my gut feeling, but let me think about it more, and come back to you at a later date (perhaps much later).

In the meanwhile, thanks in advance for bearing with me, and best regards,



