\usepackage{pgfplots}
\usepackage{circuitikz}
\usepackage{subcaption}
\usepackage{csquotes}
\usepackage[yyyymmdd]{datetime}
\usetikzlibrary{arrows.meta}
\pgfplotsset{compat=newest,compat/show suggested version=false}

\section{Digital Signal Processing}

A signal is something that carries information; examples include audio, video,
and image signals. An important point is that signals can take many equivalent
forms or representations. For example, a speech signal is produced as an
acoustic signal, but it can be converted to an electrical signal by a
microphone, and then to a string of numbers as in digital audio recording.

The term \textbf{system}, for our purposes, refers to something that can
manipulate, change, record, or transmit signals. For example, a DVD recording
stores or represents a movie or a music signal as a sequence of numbers. A DVD
player is a system for converting the numbers stored on the disc (i.e., the
numerical representation of the signal) to a video and/or acoustic signal. In
general, systems operate on signals to produce new signals or new signal
representations.~\cite{mcclellan2015dsp}

\subsection{Mathematical Representation}

\textbf{System}

A one-dimensional continuous-time system takes an input signal $x(t)$ and
produces a corresponding output signal $y(t)$. This can be represented
mathematically by

\begin{equation}
  y(t) = T\{x(t)\}
\end{equation}

which means that the input signal (waveform, image, etc.) is operated on by the
system (symbolized by the operator $T$) to produce the output $y(t)$. Consider
a system such that the output signal is the square of the input signal. The
mathematical description of this system is

\begin{equation}
  y(t) = [x(t)]^2
\end{equation}

This system is a \textit{continuous-time system} (i.e., a system whose input
and output are continuous-time signals). The corresponding discrete-time
version, which operates on a sequence of signal values indexed by the integer
$n$, is

\begin{equation}
  y[n] = (x[n])^2
\end{equation}

The implementation of the discrete-time squarer system would be trivial given
a digital computer; one simply multiplies each discrete signal value by itself.
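As a minimal sketch of this idea (the function name \texttt{squarer} is ours, not from the text), the discrete-time squarer multiplies each sample by itself:

```python
def squarer(x):
    """Discrete-time squarer: y[n] = (x[n])^2 for each sample."""
    return [v * v for v in x]

# Example: square a short input sequence.
y = squarer([1, -2, 3])  # -> [1, 4, 9]
```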

In thinking and writing about systems, it is often useful to have a visual
representation of the system. For this purpose, engineers use
\textit{block diagrams} to represent operations performed in an implementation
of a system and to show the interrelations among the many signals that may
exist in an implementation of a complex system.

\subsection{Sinusoids}

There is a general class of signals commonly called cosine signals or,
equivalently, sine signals, which are also referred to as cosine or sine
waves, particularly when speaking about acoustic or electrical signals.
Collectively, such signals are called sinusoidal signals or, more concisely,
sinusoids. Sinusoidal signals are the basic building blocks in the theory of
signals and systems.

\begin{equation}
  x(t) = A \cos (\omega_0 t + \varphi)
\end{equation}

This defines a continuous-time signal with independent variable $t$, a
continuous real variable that represents time. $A$ is the \emph{amplitude},
$\omega_0$ the \emph{radian frequency}, and $\varphi$ the \emph{phase} of the
cosine signal.

\subsection{Sampling and Aliasing}

Continuous waveforms such as sinusoidal signals must be turned into vectors,
or streams of numbers, for digital signal processing. The computer samples
values of the continuous-time signal at a constant rate, such as
48\,000 samples/s. How many numbers per second are needed to adequately
represent a continuous-time signal? The question boils down to finding the
minimum rate for constant-rate sampling.

\begin{quote}
The primary objective of our presentation is an understanding of the
\emph{sampling theorem}, which states that when the \emph{sampling rate} is
\emph{greater than twice} the \emph{highest frequency} contained in the
spectrum of the \emph{analog signal}, the \emph{original signal} can be
\emph{reconstructed exactly} from the samples.
\end{quote}

A signal whose spectrum has a finite highest frequency is called a
\textit{bandlimited} signal, and theoretically, such a signal can be sampled
and reconstructed without error. The reconstruction process must ``fill in''
the missing signal values between the sample times $t_n$ by constructing a
smooth curve through the discrete-time sample values $x(t_n)$. Mathematicians
call this process \textit{interpolation} because it may be represented as a
time-domain interpolation formula.

\subsubsection{Sampling}

A \emph{discrete-time signal} is represented mathematically by an indexed
sequence of numbers. The numbers are stored digitally, and the signal values
are held in memory locations, so they would be indexed by memory address.
Values of the discrete-time signal are denoted as $x[n]$, where $n$ is the
integer index indicating the order of the values in the sequence. The square
brackets ``$[\,]$'' enclosing the argument $n$ provide a notation that
distinguishes between the continuous-time signal $x(t)$ and a corresponding
discrete-time signal $x[n]$.

Sampling the continuous-time signal at equally spaced time instants,
$t_n = nT_s$, gives:

\begin{equation}
  x[n] = x(nT_s), \qquad -\infty < n < \infty
\end{equation}

where $x(t)$ represents any continuously varying signal such as audio. The
fixed time interval between samples, $T_s$, can also be expressed as a fixed
\emph{sampling rate}, $f_s$, in samples/s:

\begin{equation}
  f_s = \frac{1}{T_s} \quad \mathrm{samples/s}
\end{equation}

Therefore, an alternative way to write the sequence is:

\begin{equation}
  x[n] = x\left(\frac{n}{f_s}\right)
\end{equation}

\subsubsection{Sampling Sinusoidal Signals}

Sampling $A \cos (\omega t + \varphi)$ gives:

\begin{align}
  x[n] &= x(nT_s) \\
       &= A \cos (\omega n T_s + \varphi) \\
       &= A \cos (\hat{\omega} n + \varphi)
\end{align}

where $\hat{\omega}$ is defined as:

\begin{equation}
  \hat{\omega} = \omega T_s = \frac{\omega}{f_s}
\end{equation}

The signal $x[n]$ is a \emph{discrete-time cosine signal}, and $\hat{\omega}$
is its \emph{discrete-time frequency}. The ``hat'' is used to denote that this
is a new frequency variable. It is a \emph{normalized} version of the
continuous-time radian frequency with respect to the sampling frequency. Since
$\omega$ has units of rad/s, the units of $\hat{\omega} = \omega T_s$ are
radians; that is, $\hat{\omega}$ is a dimensionless quantity. This is entirely
consistent with the fact that the index $n$ in $x[n]$ is dimensionless.
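A short numerical sketch of these relations (all variable names and the 100 Hz/8000 samples-per-second figures are our illustrative choices): sampling a cosine at rate $f_s$ produces the same sequence as evaluating the normalized-frequency form $A\cos(\hat{\omega}n + \varphi)$ directly.

```python
import math

A, f, phi = 1.0, 100.0, 0.0     # amplitude, frequency (Hz), phase
fs = 8000.0                     # sampling rate (samples/s)
omega = 2 * math.pi * f         # radian frequency (rad/s)
omega_hat = omega / fs          # normalized frequency (dimensionless, radians)

# Sample x(t) = A cos(omega t + phi) at the instants t_n = n / fs.
x = [A * math.cos(omega * n / fs + phi) for n in range(16)]

# The same samples, written with the normalized frequency omega_hat.
x_hat = [A * math.cos(omega_hat * n + phi) for n in range(16)]

assert all(abs(a - b) < 1e-12 for a, b in zip(x, x_hat))
```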

\subsubsection{The Concept of Aliases}

A simple definition of the word alias would involve something like ``two names
for the same person or thing.'' When a mathematical formula defines a signal,
that formula can act as a name for the signal. In particular, two sampled
sinusoids whose normalized frequencies differ by an integer multiple of $2\pi$
produce exactly the same sequence of samples, so both formulas are names
(aliases) for the same discrete-time signal.
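A minimal numerical check of aliasing (the frequencies here are our own choices): at $f_s = 8000$ samples/s, sinusoids at 100 Hz and $100 + 8000 = 8100$ Hz yield identical sample sequences.

```python
import math

fs = 8000.0                     # sampling rate (samples/s), chosen for illustration

# Two continuous-time frequencies that differ by exactly fs...
x1 = [math.cos(2 * math.pi * 100.0 * n / fs) for n in range(32)]
x2 = [math.cos(2 * math.pi * 8100.0 * n / fs) for n in range(32)]

# ...produce the same samples: 8100 Hz is an alias of 100 Hz at this rate,
# because their normalized frequencies differ by 2*pi.
assert all(abs(a - b) < 1e-9 for a, b in zip(x1, x2))
```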

\subsubsection{Shannon Sampling Theorem}

\begin{quote}
A continuous-time signal $x(t)$ with frequencies no higher than
$f_{\mathrm{max}}$ can be reconstructed exactly from its samples
$x[n] = x(nT_s)$, if the samples are taken at a rate $f_s = \frac{1}{T_s}$
that is greater than $2f_{\mathrm{max}}$.
\end{quote}

The minimum sampling rate of $2f_{\mathrm{max}}$ is called the Nyquist rate
(Nyquist limit).
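As a trivial worked example (the 20 kHz figure is our assumption for the audible band): audio bandlimited to 20 kHz requires a sampling rate above 40 kHz, which is why rates such as 44.1 kHz and 48 kHz are common.

```python
f_max = 20_000.0                # highest frequency of interest (Hz), assumed
nyquist_rate = 2 * f_max        # minimum sampling rate from the theorem

# Common audio sampling rates comfortably exceed the 40 kHz Nyquist rate.
assert nyquist_rate == 40_000.0
assert 44_100.0 > nyquist_rate and 48_000.0 > nyquist_rate
```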

\subsection{FIR Filters}

A filter is a system that is designed to remove some component or modify some
characteristic of a signal. The \emph{finite impulse response (FIR)} systems,
or FIR \emph{filters}, are systems for which each output value is the sum of a
finite number of weighted values of the input sequence. We define the basic
input--output structure of the FIR filter as a time-domain computation based
upon what is often called a \emph{difference equation}. The unit impulse
response of the filter is defined and shown to completely describe the filter
via the operation of convolution.

\subsubsection{The Running-Average Filter}

A simple but useful transformation of a discrete-time signal is to compute a
\emph{running average} of two or more consecutive values of the sequence,
thereby forming a new sequence of the average values. The FIR filter is a
generalisation of the idea of a running average.
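A sketch of a causal $L$-point running average (the function name is ours): each output is the mean of the current and previous $L-1$ input samples, with samples before the start of the sequence taken as zero.

```python
def running_average(x, L):
    """Causal L-point running average: y[n] = (1/L) * sum of x[n-k], k = 0..L-1."""
    y = []
    for n in range(len(x)):
        window = [x[n - k] for k in range(L) if n - k >= 0]
        y.append(sum(window) / L)   # missing past samples count as zero
    return y

# Example: 3-point running average of a short ramp.
print(running_average([3, 6, 9, 12], 3))  # -> [1.0, 3.0, 6.0, 9.0]
```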

\subsubsection{Frequency Response of FIR Filters}

We now introduce the concept of the frequency response of an LTI FIR filter
and show that the \emph{frequency response} and impulse response are uniquely
related. All LTI systems possess this \emph{sinusoid-in gives sinusoid-out}
property. The frequency-response function, when plotted over all frequencies,
summarizes the response of an LTI system by giving the magnitude and phase
change experienced by all possible sinusoids.

\begin{equation}
  H(e^{j\hat{\omega}}) = \sum_{k = 0}^{M} b_k e^{-j \hat{\omega} k}
                       = \sum_{k = 0}^{M} h[k] e^{-j \hat{\omega} k}
\end{equation}
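This sum can be evaluated directly with complex arithmetic; as a sketch (names are ours), here is the frequency response of the 3-point running average, whose coefficients are $h[k] = 1/3$:

```python
import cmath
import math

def freq_response(h, omega_hat):
    """Evaluate H(e^{jw}) = sum_k h[k] e^{-j w k} for FIR coefficients h."""
    return sum(hk * cmath.exp(-1j * omega_hat * k) for k, hk in enumerate(h))

h = [1/3, 1/3, 1/3]             # 3-point running-average filter

# At omega_hat = 0 (DC) the gain is 1; at 2*pi/3 the three unit vectors
# in the sum cancel, so the gain is exactly 0.
assert abs(freq_response(h, 0.0) - 1.0) < 1e-12
assert abs(freq_response(h, 2 * math.pi / 3)) < 1e-12
```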

\subsection{Discrete-Time Fourier Transform}

The discrete-time Fourier transform (DTFT) generalizes the frequency-response
sum from the impulse response of an LTI (\emph{linear time-invariant}) system
to an arbitrary sequence $x[n]$:

\begin{equation}
  X(e^{j\hat{\omega}}) = \sum_{n = -\infty}^{\infty} x[n]e^{-j\hat{\omega}n}
\end{equation}

\subsubsection{Discrete Fourier Transform (DFT)}

The objective here is to define a numerical Fourier transform called the
discrete Fourier transform (or DFT) that results from taking frequency samples
of the DTFT:

\begin{equation}
  X[k] = \sum_{n=0}^{N - 1} x[n] e^{-j(2\pi / N)kn}, \qquad
  k = 0,\, 1,\, \dots,\, N - 1
\end{equation}

The \emph{discrete Fourier transform (DFT)} takes $N$ samples in the time
domain and transforms them into $N$ values $X[k]$ in the \emph{frequency
domain}. Typically, the values of $X[k]$ are \emph{complex}, while the values
of $x[n]$ are often \emph{real}, but $x[n]$ could also be complex.
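A direct $O(N^2)$ sketch of this sum (the function name is ours; practical implementations use the FFT instead):

```python
import cmath

def dft(x):
    """Direct DFT: X[k] = sum_n x[n] e^{-j (2 pi / N) k n}, for k = 0..N-1."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Example: the DFT of a constant sequence is concentrated in bin k = 0.
X = dft([1.0, 1.0, 1.0, 1.0])
assert abs(X[0] - 4.0) < 1e-12
assert all(abs(Xk) < 1e-12 for Xk in X[1:])
```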

\subsection{Z-Transforms}

Polynomials and rational functions play a significant role in the analysis of
linear discrete-time systems. The key result is that FIR convolution is
equivalent to polynomial multiplication. Common algebraic operations, such as
\emph{multiplying}, \emph{dividing}, and \emph{factoring polynomials}, can be
interpreted as \textbf{combining} or \textbf{decomposing} LTI systems.
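The equivalence can be checked numerically (helper name ours): multiplying two polynomials in $z^{-1}$ is exactly the convolution of their coefficient sequences.

```python
def poly_multiply(a, b):
    """Coefficients of the product polynomial; identical to FIR convolution."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj   # z^{-i} * z^{-j} contributes to z^{-(i+j)}
    return out

# (1 + z^{-1})(1 - z^{-1}) = 1 - z^{-2}: convolving the coefficient
# sequences [1, 1] and [1, -1] yields [1, 0, -1].
assert poly_multiply([1.0, 1.0], [1.0, -1.0]) == [1.0, 0.0, -1.0]
```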

\subsubsection{Definition of $z$-Transforms}

For a finite-length signal $x[n]$ with a set of signal values
$\{x[0], x[1],\ \dots,\ x[L - 1]\}$, the signal can be expressed as:

\begin{equation}
  x[n] = \sum_{k = 0}^{L - 1} x[k] \delta[n - k]
\end{equation}

Each term in the summation, $x[k] \delta[n - k]$, represents the value $x[k]$
at the time index $n = k$, which is the only index where $\delta[n - k]$ is
nonzero.

The $z$-transform of the signal $x[n]$ is defined by the formula:

\begin{equation}
  X(z) = \sum_{k = 0}^{L - 1} x[k]z^{-k}
\end{equation}

Here, $z$, the independent variable of the $z$-transform $X(z)$, is a complex
number. In this equation, the signal values $\{x[0], x[1],\ \dots,\ x[L - 1]\}$
are used as coefficients of a polynomial in $z^{-1}$. The exponent of $z^{-k}$
indicates that the polynomial coefficient $x[k]$ corresponds to the
$k^{\text{th}}$ value of the signal.

Although this is the conventional definition of the $z$-transform, it is
instructive to write $X(z)$ in the form:

\begin{equation}
  X(z) = \sum_{k = 0}^{L - 1} x[k](z^{-1})^{k}
\end{equation}

This emphasizes that $X(z)$ is simply a polynomial of degree $L - 1$ in the
variable $z^{-1}$.

\subsubsection{$z$-Transform of the Impulse Response}

The second way to obtain a $z$-domain representation of an FIR filter is to
take the $z$-transform of the impulse response:

\begin{equation}
  H(z) = \sum_{k = 0}^{M} h[k]z^{-k}
\end{equation}
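As a final sketch (function name ours): evaluating $H(z)$ on the unit circle, $z = e^{j\hat{\omega}}$, reproduces the frequency response of the FIR filter, shown here again for the 3-point running average.

```python
import cmath
import math

def H(z, h):
    """System function H(z) = sum_k h[k] z^{-k} for an FIR impulse response h."""
    return sum(hk * z ** (-k) for k, hk in enumerate(h))

h = [1/3, 1/3, 1/3]             # 3-point running-average filter again

# On the unit circle z = e^{jw}, H(z) equals the frequency response:
# unity gain at DC, and a zero at w = 2*pi/3.
assert abs(H(cmath.exp(0j), h) - 1.0) < 1e-12
assert abs(H(cmath.exp(1j * 2 * math.pi / 3), h)) < 1e-12
```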

\newpage
\printbibliography
\end{document}