Wednesday, September 28, 2016

The Fourier Transform

Excerpt from Science & Mathematics nautil.us

From an article by Aatish Bhatia, a recent physics Ph.D. working at Princeton University to bring science and engineering to a wider audience. He writes the award-winning science blog Empirical Zeal and is on Twitter as @aatishb.



 What was Fourier’s discovery, and why is it useful? Imagine playing a note on a piano. When you press the piano key, a hammer strikes a string that vibrates to and fro at a certain fixed rate (440 times a second for the A note). As the string vibrates, the air molecules around it bounce to and fro, creating a wave of jiggling air molecules that we call sound. If you could watch the air carry out this periodic dance, you’d discover a smooth, undulating, endlessly repeating curve that’s called a sinusoid, or a sine wave. (Clarification: In the example of the piano key, there will really be more than one sine wave produced. The richness of a real piano note comes from the many softer overtones that are produced in addition to the primary sine wave. A piano note can be approximated as a sine wave, but a tuning fork is a more apt example of a sound that is well-approximated by a single sinusoid.)
Now, instead of a single key, say you play three keys together to make a chord. The resulting sound wave isn’t as pretty; it looks like a complicated mess. But hidden in that messy sound wave is a simple pattern. After all, the chord was just three keys struck together, and so the messy sound wave that results is really just the sum of three notes (or sine waves).
Fourier’s insight was that this isn’t just a special property of musical chords, but applies more generally to any kind of repeating wave, be it square, round, squiggly, triangular, whatever. The Fourier transform is like a mathematical prism: you feed in a wave and it spits out the ingredients of that wave, the notes (or sine waves) that when added together will reconstruct the wave.

If this sounds a little abstract, here are a few different ways of visualizing Fourier’s trick. The first one comes to us from Lucas V. Barbosa, a Brazilian physics student who volunteers at Wikipedia, where he goes by “LucasVB.” In his visualization, the Fourier transform is a recipe: it tells you exactly how much of each note you need to mix together to reconstruct the original wave.
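
To make the recipe concrete, here is a minimal sketch in Python (NumPy assumed; the three note frequencies are illustrative choices): build a chord from three sine waves, then let the FFT, a fast implementation of the Fourier transform, recover the ingredient notes.

    import numpy as np

    fs = 8000                         # sampling rate in Hz
    t = np.arange(0, 1.0, 1.0 / fs)   # one second of samples

    # Three "piano keys" (roughly A4, C#5, E5; the exact values are assumptions)
    notes = [440.0, 554.37, 659.25]
    chord = sum(np.sin(2 * np.pi * f * t) for f in notes)

    # The Fourier transform acts as the prism: peaks in the spectrum
    # sit at the ingredient frequencies.
    spectrum = np.abs(np.fft.rfft(chord))
    freqs = np.fft.rfftfreq(len(chord), 1.0 / fs)

    # The three strongest peaks recover the original notes.
    print(sorted(freqs[np.argsort(spectrum)[-3:]]))   # ~ [440, 554, 659] Hz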
And this isn’t just some obscure mathematical trick. The Fourier transform shows up nearly everywhere that waves do. The ubiquitous MP3 format uses a variant of Fourier’s trick to achieve its tremendous compression over the WAV (pronounced “wave”) files that preceded it. An MP3 splits a song into short segments. For each audio segment, Fourier’s trick reduces the audio wave down to its ingredient notes, which are then stored in place of the original wave. The Fourier transform also tells you how much of each note contributes to the song, so you know which ones are essential. The really high notes aren’t so important (our ears can barely hear them), so MP3s throw them out, resulting in added data compression. Audiophiles don’t like MP3s for this reason: it’s not a lossless audio format, and they claim they can hear the difference.
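
The real MP3 codec is far more elaborate, but the core move can be sketched in a few lines of Python (NumPy assumed; the segment length and number of kept coefficients are arbitrary choices): per segment, store only the strongest Fourier ingredients and rebuild from those alone.

    import numpy as np

    def compress_segment(segment, keep=20):
        """Keep only the `keep` strongest FFT coefficients of one segment."""
        coeffs = np.fft.rfft(segment)
        top = np.argsort(np.abs(coeffs))[-keep:]   # indices of the loudest "notes"
        return top, coeffs[top], len(segment)

    def decompress_segment(top, values, n):
        """Rebuild an approximation of the segment from the stored notes."""
        coeffs = np.zeros(n // 2 + 1, dtype=complex)
        coeffs[top] = values
        return np.fft.irfft(coeffs, n)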
Music-identification apps use Fourier’s trick to recognize a song: they split the music into chunks, then use Fourier’s trick to figure out the ingredient notes that make up each chunk. They then search a database to see if this “fingerprint” of notes matches that of a song they have on file. Speech recognition uses the same Fourier-fingerprinting idea to compare the notes in your speech to those of a known list of words.
You can even use Fourier’s trick for images. Here’s a great video that shows how you can use circles to draw Homer Simpson’s face. The computational knowledge engine Wolfram Alpha uses a similar idea to draw famous people’s faces. This might seem like a trick you’d reserve for a very nerdy cocktail party, but it’s also used to compress images into JPEG files. In the old days of Microsoft Paint, images were saved in bitmap (BMP) files, which were a long list of numbers encoding the color of every single pixel. JPEG is the MP3 of images. To build a JPEG, you first chunk your image into tiny squares of 8 by 8 pixels. For each chunk, you use the same circle idea that reconstructs Homer Simpson’s face to reconstruct this portion of the image. Just as MP3s throw out the really high notes, JPEGs throw out the really tiny circles. The result is a huge reduction in file size with only a small reduction in quality, an insight that led to the visual online world that we all love (and that eventually gave us cat GIFs).
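
JPEG actually uses the discrete cosine transform (DCT), a close cousin of the Fourier transform, but the recipe is just as described: transform each 8-by-8 chunk, discard the small coefficients, and invert. A minimal sketch, assuming SciPy and an arbitrary threshold:

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.rand(8, 8)            # stand-in for an 8x8 image chunk
    coeffs = dctn(block, norm='ortho')

    # "Throw out the really tiny circles": zero the small coefficients.
    coeffs[np.abs(coeffs) < 0.1] = 0.0

    approx = idctn(coeffs, norm='ortho')    # slightly lossy reconstruction
    print(np.max(np.abs(block - approx)))   # small error, fewer numbers to store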
How is Fourier’s trick used in science? I put out a call on Twitter for scientists to describe how they used Fourier’s idea in their work. The response astounded me. The scientists who responded were using the Fourier transform to study the vibrations of submersible structures interacting with fluids, to try to predict upcoming earthquakes, to identify the ingredients of very distant galaxies, to search for new physics in the heat remnants of the Big Bang, to uncover the structure of proteins from X-ray diffraction patterns, to analyze digital signals for NASA, to study the acoustics of musical instruments, to refine models of the water cycle, to search for pulsars (spinning neutron stars), and to understand the structure of molecules using nuclear magnetic resonance. The Fourier transform has even been used to identify a counterfeit Jackson Pollock painting by deciphering the chemicals in the paint.

Whew! That’s quite the legacy for one little math trick.

Wednesday, September 21, 2016

Introduction to Control Systems

For a simple introduction to Control Systems, refer to the page below:

https://www.facstaff.bucknell.edu/mastascu/eControlHTML/Intro/Intro1.html


Evaluation of Control Systems

Analysis of control systems provides crucial insights to control practitioners on why and how feedback control works. Although the use of PID predates the classical control theory of the 1950s by at least two decades, it is the latter that established the control engineering discipline. The core of classical control theory is the set of frequency-response-based analysis techniques, namely Bode and Nyquist plots, stability margins, and so forth.
In particular, by examining the loop gain frequency response of the system in Fig. 19.1.9, that is, L(jω) = Gc(jω)Gp(jω), and the sensitivity function 1/[1 + L(jω)], one can determine the following (a numerical sketch follows the list):
  1. How fast the control system responds to the command or disturbance input (i.e., the bandwidth).
  2. Whether the closed-loop system is stable (Nyquist Stability Theorem); if it is stable, how much dynamic variation it takes to make the system unstable (in terms of the gain and phase change in the plant). This leads to the definition of gain and phase margins. More broadly, it defines how robust the control system is.
  3. How sensitive the performance (or closed-loop transfer function) is to the changes in the parameters of the plant transfer function (described by the sensitivity function).
  4. The frequency range and the amount of attenuation for the input and output disturbances shown in Fig. 19.1.10 (again described by the sensitivity function).
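
As a numerical illustration of items 1 and 2, here is a minimal sketch, assuming SciPy; the plant Gp(s) = 1/(s(s+1)) and controller Gc(s) = 5 are illustrative choices, not from the text. It evaluates the loop gain L(jω) = Gc(jω)Gp(jω) and reads off the gain-crossover frequency and phase margin.

    import numpy as np
    from scipy import signal

    Gc_num, Gc_den = [5.0], [1.0]            # proportional controller (assumed)
    Gp_num, Gp_den = [1.0], [1.0, 1.0, 0.0]  # plant 1/(s(s+1)) (assumed)

    # Loop gain L(s) = Gc(s)Gp(s): multiply numerators and denominators.
    L = signal.TransferFunction(np.polymul(Gc_num, Gp_num),
                                np.polymul(Gc_den, Gp_den))

    w = np.logspace(-2, 2, 2000)
    w, mag_db, phase_deg = signal.bode(L, w)

    # Gain crossover: the frequency where |L(jw)| passes through 0 dB.
    i = np.argmin(np.abs(mag_db))
    print("crossover (rad/s):", w[i])
    print("phase margin (deg):", 180.0 + phase_deg[i])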



    Digital Implementation
    Once the controller is designed and simulated successfully, the next step is to digitize it so that it can be programmed into the processor in the digital control hardware. To do this:
    1. Determine the sampling period Ts and the number of bits used in the analog-to-digital converter (ADC) and the digital-to-analog converter (DAC).
    2. Convert the continuous time transfer function Gc(s) to its corresponding discrete time transfer function Gcd(z) using, for example, Tustin’s method, s = (2/T)(z - 1)/(z + 1).
    3. From Gcd(z), derive the difference equation, u(k) = g(u(k-1), u(k-2), . . . , y(k), y(k-1), . . .), where g is a linear algebraic function.
      After the conversion, the sampled data system, with the plant running in continuous time and the controller in discrete time, should be verified in simulation first before the actual implementation. The quantization error and sensor noise should also be included to make it realistic.
    The minimum sampling frequency required for a given control system design has not been established analytically. The rule of thumb given in control textbooks is that fs = 1/Ts should be chosen approximately 30 to 60 times the bandwidth of the closed-loop system. A lower sampling frequency is possible after careful tuning, but aliasing, or signal distortion, will occur when the data to be sampled have significant energy above the Nyquist frequency. For this reason, an antialiasing filter is often placed in front of the ADC to filter out the high-frequency content in the signal.
    Typical ADC and DAC chips have 8, 12, or 16 bits of resolution; this is the length of the binary number used to approximate an analog value. The selection of the resolution depends on the noise level in the sensor signal and the accuracy specification. For example, the sensor noise level, say 0.1 percent, must be below the accuracy specification, say 0.5 percent. Allowing one bit for the sign, an 8-bit ADC with a resolution of 1/2^7, or 0.8 percent, is not good enough; similarly, a 16-bit ADC with a resolution of 1/2^15, or 0.003 percent, is unnecessary because several bits are “lost” in the sensor noise. Therefore, a 12-bit ADC, which has a resolution of 1/2^11, or about 0.05 percent, is appropriate for this case. This is an example of an “error budget,” as it is known among designers, where components are selected economically so that the sources of inaccuracy are distributed evenly.
    Converting Gc(s) to Gcd(z) is a matter of numerical integration. Many methods have been suggested; some are too simple and inaccurate (such as Euler’s forward and backward methods), others are too complex. Tustin’s method suggested above, also known as the trapezoidal method or bilinear transformation, is a good compromise. Once the discrete transfer function Gcd(z) is obtained, finding the corresponding difference equation that can be easily programmed in C is straightforward.
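
    As an illustration of steps 2 and 3, here is a minimal sketch, assuming SciPy; the PI controller Gc(s) = Kp + Ki/s and the values of Kp, Ki, and Ts are illustrative, not from the text. Tustin’s method yields Gcd(z), whose coefficients give the difference equation directly.

        from scipy.signal import cont2discrete

        Kp, Ki, Ts = 2.0, 10.0, 0.001      # illustrative gains and sampling period
        num, den = [Kp, Ki], [1.0, 0.0]    # Gc(s) = (Kp*s + Ki)/s

        numd, dend, _ = cont2discrete((num, den), Ts, method='bilinear')
        b0, b1 = numd.ravel()              # numerator of Gcd(z)
        a1 = dend[1]                       # denominator is 1 + a1*z^-1

        # Difference equation u(k) = -a1*u(k-1) + b0*y(k) + b1*y(k-1),
        # in the form of step 3 above, ready to program in C.
        print("u(k) = %+.4f*u(k-1) %+.4f*y(k) %+.4f*y(k-1)" % (-a1, b0, b1))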


    Finally, the presence of sensor noise usually requires that an antialiasing filter be used in front of the ADC to avoid distortion of the signal in the ADC. The phase lag from such a filter must be insignificant at the crossover frequency (bandwidth), or it will reduce the stability margin or even destabilize the system. This puts yet another constraint on the controller design.


    ALTERNATIVE DESIGN METHODS 


    Nonlinear PID
    Using nonlinear PID (NPID) is an alternative to PID for better performance. It maintains the simplicity and intuition of PID, but empowers it with nonlinear gains. For example, the need for integral control is reduced by making the proportional gain larger when the error is small.
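
    One common way to realize such a gain (an illustration, not necessarily this chapter’s specific NPID) is a power-law error function: with an exponent below one, the effective proportional gain k|e|^(alpha-1) grows as the error shrinks. A minimal sketch, assuming NumPy:

        import numpy as np

        def nonlinear_p(e, k=1.0, alpha=0.5):
            """Nonlinear proportional action: larger effective gain for small errors."""
            return k * np.sign(e) * np.abs(e) ** alpha

        # Effective gain k*|e|^(alpha-1) rises as the error shrinks:
        for e in (1.0, 0.1, 0.01):
            print(e, nonlinear_p(e) / e)   # 1.0, ~3.2, 10.0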


    Controllability and Observability. Controllability and observability are useful system properties and are defined as follows. Consider an nth order system described by
    ẋ = Ax + Bu, z = Mx
    where A is an n × n matrix. The system is controllable if it is possible to transfer any state to any other state in finite time. This property is important as it measures, for example, the ability of a satellite system to reorient itself to face another part of the earth’s surface using the available thrusters, or to shift the temperature in an industrial oven to a specified temperature. Two equivalent tests for controllability are:
    The system (or the pair (A, B)) is controllable if and only if the controllability matrix C = [B, AB, ..., A^(n-1)B] has full (row) rank n; equivalently, if and only if [s_i I - A, B] has full (row) rank n for all eigenvalues s_i of A.
    The system is observable if, by observing the output and the input over a finite period of time, it is possible to deduce the value of the state vector of the system. If, for example, a circuit is observable, it may be possible to determine all the voltages across the capacitors and all the currents through the inductors by observing the input and output voltages.
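
    Both tests are easy to run numerically. A minimal sketch, assuming NumPy; the matrices A, B, and M are illustrative:

        import numpy as np

        A = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])
        B = np.array([[0.0],
                      [1.0]])
        M = np.array([[1.0, 0.0]])
        n = A.shape[0]

        # Controllability matrix C = [B, AB, ..., A^(n-1)B]; full rank n <=> controllable.
        C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
        print("controllable:", np.linalg.matrix_rank(C) == n)

        # Dual test for observability: O = [M; MA; ...; MA^(n-1)].
        O = np.vstack([M @ np.linalg.matrix_power(A, i) for i in range(n)])
        print("observable:", np.linalg.matrix_rank(O) == n)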


    Eigenvalue Assignment Design. Consider the equations ẋ = Ax + Bu, y = Cx + Du, and u = p + Kx. When the system is controllable, K can be selected to assign the closed-loop eigenvalues to any desired locations (real or complex conjugate) and thus significantly modify the behavior of the open-loop system. Many algorithms exist to determine such a K. In the case of a single input, there is a convenient formula called Ackermann’s formula:
    K = -[0, ..., 0, 1] C^(-1) α_d(A)
    where C = [B, AB, ..., A^(n-1)B] is the n × n controllability matrix and the roots of α_d(s) are the desired closed-loop eigenvalues.
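
    A minimal numerical sketch of Ackermann’s formula, assuming NumPy and following the u = p + Kx sign convention above; the double-integrator plant and desired eigenvalues are illustrative:

        import numpy as np

        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])    # double integrator (illustrative)
        B = np.array([[0.0],
                      [1.0]])
        desired = [-1.0, -2.0]        # desired closed-loop eigenvalues
        n = A.shape[0]

        C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

        # alpha_d(A): the desired characteristic polynomial evaluated at A.
        coeffs = np.poly(desired)     # s^2 + 3s + 2  ->  [1, 3, 2]
        alpha_A = sum(c * np.linalg.matrix_power(A, n - i)
                      for i, c in enumerate(coeffs))

        last_row = np.zeros((1, n)); last_row[0, -1] = 1.0
        K = -last_row @ np.linalg.inv(C) @ alpha_A   # K = -[0,...,0,1] C^(-1) alpha_d(A)
        print(np.linalg.eigvals(A + B @ K))          # ~ [-1, -2]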

    Refer link below
    https://www3.nd.edu/~pantsakl/Publications/348A-EEHandbook05.pdf

Terms in Control Theory

What are Eigenvalues?


Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes. Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.[11] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[12] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.

One of the particular values of a certain parameter for which a differential equation or matrix equation has an eigenfunction. In wave mechanics, an eigenvalue is equivalent to the energy of a quantum state of a system.
  1. Each of a set of values of a parameter for which a differential equation has a non-zero solution (an eigenfunction) under given conditions.
  2. Any number such that a given matrix minus that number times the identity matrix has zero determinant (a numerical sketch follows).
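
A minimal sketch of the second definition, assuming NumPy: the eigenvalues of a matrix A are exactly the numbers λ for which det(A - λI) = 0.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)                                # [3. 1.]

    # Each eigenvalue makes A minus that number times the identity singular:
    for lam in eigenvalues:
        print(np.linalg.det(A - lam * np.eye(2)))     # ~0 for both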


What is a Campbell Diagram & where is it used?

A Campbell diagram plot represents a system’s response spectrum as a function of its oscillation regime. It is named for Wilfred Campbell, who introduced the concept; it is also called an interference diagram.
In acoustical engineering, the Campbell diagram would represent the pressure spectrum waterfall plot versus the machine’s shaft rotation speed. The Campbell diagram is used to evaluate the critical speeds at different operating speeds.

Tuesday, September 13, 2016

What is Noise?


Ans from WhatIs.com

Noise is unwanted electrical or electromagnetic energy that degrades the quality of signals and data.  Noise occurs in digital and analog systems, and can affect files and communications of all types, including text, programs, images, audio, and telemetry.
In a hard-wired circuit such as a telephone-line-based Internet hookup, external noise is picked up from appliances in the vicinity, from electrical transformers, from the atmosphere, and even from outer space.  Normally this noise is of little or no consequence.  However, during severe thunderstorms, or in locations where many electrical appliances are in use, external noise can affect communications.  In an Internet hookup it slows down the data transfer rate, because the system must adjust its speed to match conditions on the line.  In a voice telephone conversation, noise rarely sounds like anything other than a faint hissing or rushing.
Noise is a more significant problem in wireless systems than in hard-wired systems. In general, noise originating from outside the system is inversely proportional to the frequency, and directly proportional to the wavelength.  At a low frequency such as 300 kHz, atmospheric and electrical noise are much more severe than at a high frequency like 300 MHz.  Noise generated inside wireless receivers, known as internal noise, is less dependent on frequency.  Engineers are more concerned about internal noise at high frequencies than at low frequencies, because the less external noise there is, the more significant the internal noise becomes.
Communications engineers are constantly striving to develop better ways to deal with noise.  The traditional method has been to minimize the signal bandwidth to the greatest possible extent.  The less spectrum space a signal occupies, the less noise is passed through the receiving circuitry.  However, reducing the bandwidth limits the maximum speed of the data that can be delivered.  Another, more recently developed scheme for minimizing the effects of noise is called digital signal processing (DSP).  Using fiber optics, a technology far less susceptible to noise, is another approach.

Why is noise amplified by numerical differentiation ?

From blog.prosig.com
Why should differentiation be much noisier than integration?  The answer is that differentiation is a subtraction process: at its most basic level, we take the difference between two successive values and then divide by the time between samples. The two adjacent data points are often quite similar in size, so the difference is small and less accurate; we then divide by what is often a small time difference, which tends to amplify any errors. Integration, on the other hand, is addition. Since broadband noise tends to alternate in sign from sample to sample, the noise tends to cancel out.
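
A minimal sketch of this effect, assuming NumPy; the signal, noise level, and sampling rate are illustrative:

    import numpy as np

    fs = 1000.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 5 * t)
    noisy = clean + 0.01 * rng.standard_normal(t.size)

    # Differentiation: subtract adjacent samples and divide by the small
    # time step, which magnifies the tiny random differences.
    deriv = np.diff(noisy) * fs
    true_deriv = 2 * np.pi * 5 * np.cos(2 * np.pi * 5 * t)[:-1]

    # Integration: a running sum times the time step; alternating noise cancels.
    integ = np.cumsum(noisy) / fs
    true_integ = (1 - np.cos(2 * np.pi * 5 * t)) / (2 * np.pi * 5)

    print("derivative error std:", np.std(deriv - true_deriv))   # large (~ noise * fs)
    print("integral error std:  ", np.std(integ - true_integ))   # tiny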

Thursday, September 1, 2016

Important terms in Control Theory

A control system consists of

Inputs, which are things that we can not only measure, but to which we can assign chosen values (constants or functions of time). Examples: Drug dosages and treatment regimens.

Outputs, which are things that we can measure, but to which we cannot assign values. Examples: Concentrations of administered drug in urine, blood, etc.

States, which are things that affect the outputs, but which we cannot even measure because we cannot directly access them. Examples: Concentration of drug in a targeted organ.

Trivial: a solution or example that is ridiculously simple and of little interest. Often, solutions or examples involving the number 0 are considered trivial. Nonzero solutions or examples are considered nontrivial.

Causality (also referred to as causation, or cause and effect) is the agency or efficacy that connects one process (the cause) with another process or state (the effect), where the first is understood to be partly responsible for the second, and the second is dependent on the first.

Singularity: a point where some property is infinite. For example, at the center of a black hole, according to classical theory, the density is infinite (because a finite mass is compressed to a zero volume); hence it is a singularity.

Stochastic: for a system to be stochastic, one or more parts of the system must have randomness associated with it, unlike a deterministic system.