### IT2302- INFORMATION THEORY AND CODING Two Marks Questions With Answers

Anna University, Chennai

IT2302- INFORMATION THEORY AND CODING UNIT – I

1. What is prefix coding?

Prefix coding is a variable-length coding algorithm. It assigns binary codewords to messages according to their probabilities of occurrence. A prefix of a codeword is any sequence that forms its initial part. In a prefix code, no codeword is the prefix of any other codeword.
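The prefix condition can be checked mechanically by testing every pair of codewords; a minimal Python sketch (the example code sets are assumed for illustration):

```python
def is_prefix_free(codewords):
    """Return True if no codeword is a prefix of any other codeword."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

print(is_prefix_free(["0", "10", "110", "111"]))  # True: a valid prefix code
print(is_prefix_free(["0", "01", "11"]))          # False: "0" is a prefix of "01"
```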

2. State the channel coding theorem for a discrete memoryless channel.

Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and given a channel with capacity C. Then if,

R ≤ C

there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message that may be made arbitrarily small.

3. Explain channel capacity theorem.

According to the channel capacity theorem (Shannon–Hartley theorem), the capacity of a band-limited channel corrupted by additive white Gaussian noise is

C = B log2(1 + S/N) bits/sec

Here B is the channel bandwidth and S/N is the signal to noise ratio.
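The capacity formula can be evaluated directly; a small illustrative Python sketch (the 3.4 kHz bandwidth and 30 dB signal to noise ratio are example values, as used for telephone channels later in this document):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: 3.4 kHz channel, S/N = 1000 (i.e., 30 dB)
print(round(channel_capacity(3400, 1000)))  # about 33.9 kb/s
```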

4. Define channel capacity of the discrete memoryless channel.

The channel capacity of the discrete memoryless channel is defined as the maximum average mutual information, where the maximization is taken with respect to the input probabilities P(xi):

C = max I(X;Y), maximized over P(xi)

5. Define mutual information.

Mutual information is defined as the amount of information transferred when xi is transmitted and yi is received. It is represented by I(xi, yi) and is given as,

I(xi, yi) = log2 [P(xi | yi) / P(xi)] bits

6. State two properties of mutual information.

Ø The mutual information is symmetric: I(X;Y) = I(Y;X)

Ø The mutual information is always non-negative: I(X;Y) ≥ 0

7. Define efficiency of the source encoder.

Efficiency of the source encoder is given as,

η = Entropy (H) / Average number of bits per codeword (N)

8. Define code redundancy.

It is the measure of redundancy of bits in the encoded message sequence. It is given as,

Redundancy = 1 – code efficiency = 1 – η

It should be as low as possible.

9. Define rate of information transmission across the channel.

Rate of information transmission across the channel is given as, Dt = [H(X) – H(X/Y)]r bits/sec

Here H(X) is the entropy of the source.

H(X/Y) is the conditional entropy.

10. Define bandwidth efficiency.

The ratio of channel capacity to bandwidth is called bandwidth efficiency:

Bandwidth efficiency = channel capacity (C) / bandwidth (B)

11. What is the capacity of the channel having infinite bandwidth?

The capacity of such a channel is given as,

C = 1.44 (S/N0)

Here S is the signal power and N0 is the noise power spectral density.

12. Define a discrete memoryless channel.

For the discrete memoryless channels, input and output, both are discrete random variables. The current output depends only upon current input for such channel.

13. Find entropy of a source emitting symbols x, y, z with probabilities of 1/5, 1/2, 1/3 respectively.

p1 = 1/5, p2 = 1/2, p3 = 1/3

H = Σk pk log2 (1/pk)

= 1/5 log2 5 + 1/2 log2 2 + 1/3 log2 3

= 1.493 bits/symbol
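As a check, the entropy sum can be evaluated directly (a minimal sketch; note the sum rounds to about 1.493 bits/symbol):

```python
import math

def entropy(probs):
    """H = sum over k of p_k * log2(1/p_k), in bits/symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(round(entropy([1/5, 1/2, 1/3]), 3))  # 1.493
```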

14. An alphabet set contains 3 letters A, B, C transmitted with probabilities of 1/3, 1/4, 1/4. Find entropy.

p1 = 1/3, p2 = 1/4, p3 = 1/4

H = Σk pk log2 (1/pk)

= 1/3 log23 + 1/4 log2 4 +1/4 log24

= 1.52832 bits/symbol

15. Define information

The amount of information carried by a message with probability pk is: Ik = log2 (1/pk) bits

16. Write the properties of information

Ø If there is more uncertainty about the message, the information carried is also more.

Ø If the receiver already knows the message being transmitted, the amount of information carried is zero.

Ø If I1 is the information carried by message m1, and I2 is the information carried by m2, then the total amount of information carried due to m1 and m2 is I1 + I2.

17. Calculate the amount of information if pk = ¼

Amount of information: Ik = log2 (1/pk)

= log10 4 / log10 2

= 2 bits

18. What is entropy?

Average information is represented by entropy. It is represented by H.

H = Σk pk log2 (1/pk)

19. Properties of entropy:

Ø Entropy is zero if the event is sure or impossible:

H = 0 if pk = 0 or 1

Ø When pk = 1/M for all M symbols, the symbols are equally likely. For such a source the entropy is given as H = log2 M

Ø The upper bound on entropy is given as,

Hmax = log2 M

20. Define code variance

Variance is the measure of variability in codeword lengths. It should be as small as possible. It is given as,

σ2 = Σk pk (nk – N)2

pk – probability of kth symbol

nk – number of bits assigned to kth symbol

N – average codeword length
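Using the quantities defined above, a small Python sketch (the example probabilities and codeword lengths are assumed for illustration):

```python
def code_variance(probs, lengths):
    """Variance of codeword lengths: sigma^2 = sum_k p_k * (n_k - N)^2,
    where N = sum_k p_k * n_k is the average codeword length."""
    N = sum(p * n for p, n in zip(probs, lengths))
    return sum(p * (n - N) ** 2 for p, n in zip(probs, lengths))

# Example code with lengths 1, 2, 2 for probabilities 0.5, 0.25, 0.25
print(code_variance([0.5, 0.25, 0.25], [1, 2, 2]))  # 0.25
```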

UNIT-II

1. Define Nyquist rate.

Let the signal be band limited to W Hz. Then the Nyquist rate is given as,

Nyquist rate = 2W samples/sec

Aliasing will not take place if sampling rate is greater than Nyquist rate.

2. What is meant by aliasing effect?

The aliasing effect takes place when the sampling frequency is less than the Nyquist rate. Under such a condition, the spectrum of the sampled signal overlaps with itself. Hence higher frequencies take the form of lower frequencies. This interference of frequency components is called the aliasing effect.

3. What does PWM refer to?

PWM is pulse width modulation. The width of the pulse changes according to the amplitude of the modulating signal. It is also referred to as pulse duration modulation or PDM.

4. State sampling theorem.

Sampling theorem states that a band limited signal of finite energy, which has no frequency components higher than W Hz, is completely described by specifying the values of the signal at instants of time separated by 1/2W seconds, and

A band limited signal of finite energy, which has no frequency components higher than W Hz, may be completely recovered from the knowledge of its samples taken at the rate of 2W samples per second.

5. Mention two merits of DPCM.

Ø Bandwidth requirement of DPCM is less compared to PCM.

Ø Quantization error is reduced because of the prediction filter.

6. What is the main difference in DPCM and DM?

DM encodes each input sample using only one bit. It sends the information about +∂ or –∂, i.e., step rise or fall. DPCM can have more than one bit for encoding a sample; it sends the information about the difference between the actual sample value and the predicted sample value.

7. How the message can be recovered from PAM?

The message can be recovered from PAM by passing the PAM signal through reconstruction filter. The reconstruction filter integrates amplitudes of PAM pulses. Amplitude smoothing of the reconstructed signal is done to remove amplitude discontinuities due to pulses.

8. Write an expression for the bandwidth of binary PCM with N messages, each with a maximum frequency of fm Hz.

If v bits are used to code each input sample, then the bandwidth of PCM is given as,

BT ≥ N. v . fm

Here v. fm is the bandwidth required by one message.

9. How is PDM wave converted into PPM systems?

The PDM signal is given as a clock signal to a monostable multivibrator. The multivibrator triggers on the falling edge. Hence a PPM pulse of fixed width is produced after the falling edge of the PDM pulse. PDM represents the input signal amplitude in the form of the width of the pulse. A PPM pulse is produced after this 'width' of the PDM pulse. In other words, the position of the PPM pulse depends upon the input signal amplitude.

10. Mention the use of adaptive quantizer in adaptive digital waveform coding schemes.

An adaptive quantizer changes its step size according to the variance of the input signal. Hence quantization error is significantly reduced due to adaptive quantization. ADPCM uses adaptive quantization. The bit rate of such schemes is also reduced due to adaptive quantization.

11. What do you understand from adaptive coding?

In adaptive coding, the quantization step size and prediction filter coefficients are changed as per properties of input signal. This reduces the quantization error and number of bits used to represent the sample value. Adaptive coding is used for speech coding at low bit rates.

12. What is meant by quantization?

While converting the signal value from analog to digital, quantization is performed. The analog value is assigned to the nearest digital level. This is called quantization. The quantized value is then converted to equivalent binary value. The quantization levels are fixed depending upon the number of bits. Quantization is performed in every analog to digital conversion.

13. The signal to quantization noise ratio in a PCM system depends on ……

The signal to quantization noise ratio in PCM is given as,

(S/N)dB ≤ (4.8 + 6v) dB

Here v is the number of bits used to represent samples in PCM. Hence the signal to quantization noise ratio in PCM depends upon the number of bits or quantization levels.

14. For the transmission of normal speech signal in the PCM channel needs the B.W. of…..

Speech signals have a maximum frequency of 3.4 kHz. Normally 8-bit PCM is used for speech. The transmission bandwidth of PCM is given as,

BT ≥ vW

≥ 8 x 3.4 kHz

≥ 27.2 kHz.

15. It is required to transmit speech over PCM channel with 8-bit accuracy. Assume the speech in baseband limited to 3.6 kHz. Determine the bit rate?

The signaling rate in PCM is given as,

R= v fs

Here v is the number of bits, i.e., 8.

The maximum signal frequency is W = 3.6 kHz. Hence the minimum sampling frequency will be,

fs = 2W

= 2 x3.6 kHz

= 7.2 kHz.

R = 8x7.2x103

= 57.6 kbits/sec
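The calculation above can be written as a small Python helper (an illustrative sketch only):

```python
def pcm_bit_rate(max_freq_hz, bits_per_sample):
    """Signaling rate R = v * fs, with fs = 2W (Nyquist-rate sampling)."""
    fs = 2 * max_freq_hz         # minimum sampling frequency
    return bits_per_sample * fs  # bits per second

print(pcm_bit_rate(3600, 8))  # 57600 bits/sec, as in the worked example
```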

16. What is meant by adaptive delta modulation?

In adaptive delta modulation, the step size is adjusted as per the slope of the input signal. Step size is made high if slope of the input signal is high. This avoids slope overload distortion.

17. What is the advantage of delta modulation over pulse modulation schemes?

Delta modulation encodes one bit per sample. Hence the signaling rate is reduced in DM.

18. What should be the minimum bandwidth required to transmit a PCM channel?

The minimum transmission bandwidth in PCM is given as,

BT = vW

Here v is the number of bits used to represent one sample.

W is the maximum signal frequency.

19. What is the advantage of delta modulation over PCM?

Delta modulation uses one bit to encode one sample. Hence the bit rate of delta modulation is low compared to PCM.

20. How are distortions overcome in ADM?

Ø The slope overload and granular noise occur mainly because of the fixed step size in the delta modulator.

Ø In ADM, the step size is varied according to the amplitude variations of the input signal.

Ø Step size is made larger for fast amplitude changes and smaller for slowly varying amplitudes.

UNIT – III Error Control Coding

1. What is hamming distance?

The Hamming distance between two code vectors is equal to the number of elements in which they differ. For example, let the two code words be,

X = (101) and Y = (110)

These two code words differ in the second and third bits. Therefore the Hamming distance between X and Y is two.
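The definition can be sketched directly in Python:

```python
def hamming_distance(x, y):
    """Number of positions in which two equal-length code vectors differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("101", "110"))  # 2
```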

2. Define code efficiency?

The code efficiency is the ratio of message bits in a block to the transmitted bits for that block by the encoder, i.e.,

Code efficiency = message bits / transmitted bits = k / n

3. What is meant by systematic and non systematic codes?

In a systematic block code, message bits appear first and then check bits. In the nonsystematic code, message and check bits cannot be identified in the code vector.

4. What is meant by linear code?

A code is linear if modulo-2 sum of any two code vectors produces another code vector. This means any code vector can be expressed as linear combination of other code vectors.

5. What are the error detection and correction capabilities of Hamming codes?

The minimum distance (dmin) of Hamming codes is 3. Hence they can be used to detect double errors or correct single errors. Hamming codes are basically linear block codes with dmin = 3.

6. What is meant by cyclic code?

Cyclic codes are a subclass of linear block codes. They have the property that a cyclic shift of one codeword produces another codeword. For example, consider the codeword

X = (xn-1, xn-2, …, x1, x0)

Shifting the above code vector cyclically to the left gives,

X' = (xn-2, xn-3, …, x0, xn-1)

The above code vector is also a valid code vector.

7. How syndrome is calculated in Hamming codes and cyclic codes?

In Hamming codes the syndrome is calculated as,

S = YHT

Here Y is the received vector and HT is the transpose of the parity check matrix. In cyclic codes, the syndrome polynomial is given as,

S(p) = rem [Y(p)/G(p)]

Here Y(p) is the received vector polynomial and G(p) is the generator polynomial.
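The matrix form of the syndrome computation can be sketched as follows (Python; the (7,4) Hamming parity check matrix shown is one common form, assumed here for illustration):

```python
def syndrome(received, H):
    """Compute S = Y * H^T (mod 2) for a binary linear block code.
    H is a list of rows; received is a list of bits."""
    return [sum(y * h for y, h in zip(received, row)) % 2 for row in H]

# One common parity check matrix of the (7,4) Hamming code (assumed example)
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

print(syndrome([0, 0, 1, 0, 0, 1, 1], H))  # [0, 0, 0] -> valid codeword
```

A nonzero syndrome equals the column of H corresponding to a single-bit error, which is what makes single-error correction possible.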

8. What is BCH code?

BCH codes are among the most extensive and powerful error correcting cyclic codes. The decoding of BCH codes is comparatively simple. For any positive integers m and t, there exists a BCH code with the following parameters:

Block length : n = 2m-1

Number of parity check bits : n-k ≤ mt

Minimum distance : dmin ≥ 2t + 1

9. What is RS code?

These are non-binary BCH codes. The encoder for RS codes operates on multiple bits simultaneously. The (n,k) RS code takes groups of m-bit symbols of the incoming binary data stream. It takes k such symbols in one block. Then the encoder adds (n–k) redundant symbols to form the codeword of n symbols.

RS code has :

Block length : n = 2m-1

Message size : k symbols

Number of parity check bits : n-k= 2t

Minimum distance : dmin = 2t + 1

10. What is the difference between block codes and convolutional codes?

Block codes take k message bits simultaneously and form an n-bit code vector, also called a block. A convolutional code takes one message bit at a time and generates two or more encoded bits. Thus convolutional codes generate a continuous string of encoded bits.

11. Define constraint length in convolutional codes.

Constraint length is the number of shifts over which the single message bit can influence the encoder output. It is expressed in terms of message bits.

12. Define free distance and coding gain.

Free distance is the minimum distance between code vectors. It is also equal to minimum weight of the code vectors.

Coding gain is used as a basis of comparison for different coding methods. For the same bit error rate, the coding gain is defined as,

A = [Eb/N0] uncoded / [Eb/N0] coded

13. Why cyclic codes are extremely well suited for error detection?

Ø They are easy to encode.

Ø They have a well defined mathematical structure. Therefore efficient decoding schemes are available.

14. What is syndrome?

Syndrome gives an indication of errors present in the received vector Y. If YHT = 0, then there are no errors in Y and it is a valid code vector. A nonzero value of YHT is called the syndrome. Its nonzero value indicates that Y is not a valid code vector and contains errors.

15. Define dual code.

Let there be an (n,k) block code. It satisfies HGT = 0. Then the (n, n–k), i.e. (n,q), block code is called the dual code. For every (n,k) block code, there exists a dual code of size (n,q).

16. Write syndrome properties of liner block codes.

Ø Syndrome is obtained by S = YHT.

Ø If Y = X, then S = 0, i.e., no error in the output.

Ø If Y ≠ X, then S ≠ 0, i.e., there is an error in the output.

Ø Syndrome depends upon the error pattern only, i.e., S = EHT.

17. What is Hamming code? Write its conditions

Hamming codes are (n,k) linear block codes with the following conditions:

Ø Number of check bits q ≥ 3

Ø Block length n = 2q – 1

Ø Number of message bits k = n – q

Ø Minimum distance dmin = 3

18. List the properties of generator polynomial of cyclic codes.

Ø Generator polynomial is a factor of every code polynomial X(p) and of (pn + 1).

Ø Code polynomial, message polynomial and generator polynomial are related by X(p) = M(p) G(p).

Ø Generator polynomial is of degree q.

19. What is Hadamard code?

The Hadamard code is derived from the Hadamard matrix. The Hadamard matrix is an n x n square matrix. Rows of this Hadamard matrix represent code vectors. Thus an n x n Hadamard matrix represents n code vectors of n bits each. If the block of message vector contains k bits, then

n = 2k

20. Write the advantage of extended code

This code can detect more errors compared to the normal (n,k) block code. But it cannot be used for error correction.

UNIT – IV

1. State the main application of Graphics Interchange Format(GIF)

The GIF format is used mainly on the internet to represent and compress graphical images. GIF images can be transmitted and stored over the network in interlaced mode. This is very useful when images are transmitted over low bit rate channels.

2. Explain Runlength encoding.

Runlength encoding is the simplest lossless encoding technique. It is mainly used to compress text or digitized documents. Binary data strings are well suited to runlength encoding. Consider the binary data string

111111100000011111000……….

If we apply runlength coding to the above data string, we get,

7,1; 6,0; 5,1; 3,0……

Thus there are seven binary 1's, followed by six binary 0's, followed by five binary 1's and so on.
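A minimal sketch of the idea, using Python's standard library to group consecutive identical bits:

```python
from itertools import groupby

def run_length_encode(bits):
    """Encode a binary string as (run length, bit) pairs."""
    return [(len(list(group)), int(bit)) for bit, group in groupby(bits)]

print(run_length_encode("111111100000011111000"))
# [(7, 1), (6, 0), (5, 1), (3, 0)]
```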

3. What is JPEG standard?

JPEG stands for Joint Photographic Experts Group. This group has developed a standard for compression of monochrome/color still photographs and images. This compression standard is known as the JPEG standard. It is also known as ISO standard 10918. It provides compression ratios up to 100:1.

4. Why differential encoding is carried out only for DC coefficient in JPEG?

Ø The DC coefficient represents the average color/luminance/chrominance in the corresponding block. Therefore it is the largest coefficient in the block.

Ø Very small physical area is covered by each block. Hence the DC coefficients do not vary much from one block to the next.

Ø Since the DC coefficients vary slowly, differential encoding is the best suited compression for DC coefficients. It encodes the difference between each pair of values rather than their absolute values.

5. What do you understand by "GIF interlaced mode"?

The image data can be stored and transferred over the network in an interlaced mode. The data is stored in such a way that the decompressed image is built up in progressive way.

6. Explain in brief “spatial frequency” with the aid of a diagram.

The rate of change of pixel magnitude along the scanning line is called spatial frequency.

7. Write the advantages of Data compression

Ø Huge amount of data is generated in text, images, audio, speech and video.

Ø Because of compression, the transmission data rate is reduced.

Ø Storage requirement becomes less due to compression. Due to video compression, it is possible to store one complete movie on two CDs.

Ø Transportation of the data is easier due to compression.

8. Write the drawbacks of Data compression

Ø Due to lossy compression, some of the data is lost.

Ø Compression and decompression increase the complexity of the transmitter and receiver.

Ø Coding time is increased due to compression and decompression.

9. Compare lossless and lossy compression

| S.No. | Lossless compression | Lossy compression |
| --- | --- | --- |
| 1. | No information is lost | Some information is lost |
| 2. | Completely reversible | Not reversible |
| 3. | Used for text and data | Used for speech and video |
| 4. | Compression ratio is less | High compression ratio |
| 5. | Compression is independent of human response | Compression depends upon sensitivity of human ear, eyes, etc. |

10. Compare static coding and dynamic coding

| S.No. | Static coding | Dynamic coding |
| --- | --- | --- |
| 1. | Codewords are fixed during compression | Codewords change dynamically during compression |
| 2. | Statistical characteristics of the data are known | Statistical characteristics of the data are not known |
| 3. | Receiver knows the codewords in advance | Receiver derives the codewords during decompression |
| 4. | Ex: Static Huffman coding | Ex: Dynamic Huffman coding |

11. Write the principle of static Huffman coding

In static Huffman coding, the character string to be transmitted is analyzed. The frequency of occurrence of each character is determined. The variable length codewords are then assigned to each character. The coding operation creates an unbalanced tree. It is also called Huffman coding tree.
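The procedure described above can be sketched in Python: character frequencies are counted, then the two least frequent subtrees are repeatedly merged, assigning 0/1 prefixes (the input string is an assumed example):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a static Huffman code from character frequencies in `text`.
    Returns a dict mapping each character to its binary codeword."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate single-symbol alphabet
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreaker, {char: partial codeword})
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("AAAABBC")
assert len(codes["A"]) < len(codes["C"])  # frequent chars get shorter codes
print(codes)
```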

12. How arithmetic coding is advantages over Huffman coding for text compression?

| S.No. | Huffman coding | Arithmetic coding |
| --- | --- | --- |
| 1. | Codes for the individual characters are derived | Coding is done for messages of short lengths |
| 2. | Shannon's rate is achieved only if character probabilities are all integer powers of 1/2 | Shannon's rate is always achieved irrespective of the probabilities of characters |
| 3. | Precision of the computer does not affect coding | Precision of the computer determines the length of the string that can be encoded |
| 4. | Huffman coding is a simple technique | Arithmetic coding is complicated |

13. Define compression

A large amount of data is generated in the form of text, images, audio, speech and video. Compression reduces the number of bits required to represent, store or transmit this data.

14. What is the principle of data compression

The data is passed through a compression algorithm before storage or transmission, and through the corresponding decompression algorithm at the receiver to recover the information:

Source data → Compression → Decompression → Recovered data

15. What are the types of compression

The compression can be of two types: Lossless compression and lossy compression

Lossless compression: In lossless compression, no part of the original information is lost during compression. Decompression reproduces the original information exactly.

Lossy compression: In lossy compression some information is lost during compression. Hence decompression does not reproduce the original information exactly.

16. What are „make-up codes‟ and termination codes in digitization of documents?

Make-up codes and termination codes give codewords for contiguous white and black pels along a scanned line.

Termination codes: these codes give codewords for black and white runlengths from 0 to 63, in steps of 1 pel.

Make-up codes: these codes give codewords for black and white runlengths that are multiples of 64 pels.

17. What are JPEG standards.

JPEG stands for Joint Photographic Experts Group. This group works on an international compression standard for colour and monochrome continuous tone still images, photographs etc. The group came up with a compression standard which is widely known as the JPEG standard. It is also known as ISO standard 10918.

18. What are the types of JPEG algorithms

There are two types of JPEG algorithms.

Baseline JPEG: During decoding, this algorithm draws lines until the complete image is shown.

Progressive JPEG: During decoding, this JPEG algorithm draws the whole image at once, but in very poor quality. Then another layer of data is added over the previous image to improve its quality. Progressive JPEG is used for images on the web. The user can make out the image before it is fully downloaded.

19. Draw the block diagram of JPEG encoder

Source image → Block preparation → DCT → Quantization → Entropy encoding → Frame building → Encoded image data (JPEG)

20. What type of encoding techniques is applied to AC and DC co- efficient in JPEG?

Ø The DC coefficients normally have large amplitudes. They vary slowly from block to block. Differential encoding becomes very efficient for such data. It encodes only the difference among the coefficients.

Ø The AC coefficients are the remaining 63 coefficients in each block. They are fast varying. Hence runlength encoding proves to be efficient for such data.

UNIT – V

1. What is dolby AC-1?

Dolby AC-1 is an audio coding standard. It uses a psychoacoustic model at the encoder and has fixed bit allocations for each subband.

2. What is the need of MIDI standard?

The MIDI stands for musical instrument digital interface(MIDI). It normally specifies the details of digital interface of various musical instruments to micro-computer. It is essential to access, record or store the music generated from musical instruments.

3. What is perceptual coding?

In perceptual coding only the perceptual features of the sound are stored. This gives a high degree of compression. The human ear is not equally sensitive to all frequencies. Similarly, masking of a weaker signal takes place when a louder signal is present nearby in frequency. These properties are exploited in perceptual coding.

4. Explain CELP principles.

Ø CELP uses a more sophisticated model of the vocal tract.

Ø Standard audio segments are stored as waveform templates. Encoder and decoder both have the same set of templates, called the codebook.

Ø Every digitized segment is compared with the waveform templates in the codebook.

Ø The matching template is differentially encoded and transmitted.

Ø At the receiver, the differentially encoded codeword selects the matching template from the codebook.

5. What is significance of D- frames in video coding

Ø The D-frames are inserted at regular intervals in the encoded sequence of frames. They are highly compressed and are ignored during the decoding of P- and B-frames.

Ø The D-frames consist of only DC coefficients and hence they generate a low resolution picture.

Ø The low resolution pictures generated by D-frames are useful in fast forward and rewind applications.

6. Define the terms “GOP” and “Prediction span” with reference to video compression.

GOP (Group of Pictures): the number of frames or pictures between two successive I-frames is called a group of pictures or GOP. A typical value of GOP varies from 3 to 12.

Prediction span: The number of frames between a P-frame and the immediately preceding I- or P-frame is called the prediction span. The typical value of the prediction span lies between 1 and 3.

7. Define the terms “processing delay” and “algorithmic delay” with respect to speech coders.

Processing delay: It is the combined time required for (i) analyzing each block of digitized samples at the encoder and (ii) reconstructing the speech at the decoder.

Algorithmic delay: It is the time required to accumulate each block of samples in the memory.

8. What do you understand by frequency masking?

The strong signal reduces level of sensitivity of human ear to other signals which are near to it in frequency. This effect is called frequency masking.

9. Find the average compression ratio of the GOP which has a frame sequence IBBPBBPBBPBB where the individual compression ratios of I, P and B are 10: 1, 20: 1, 50: 1 respectively.

There are total 12 frames of which I-frames are 1, P-frames are 3 and B-frames are 8. Hence average compression ratio will be,

Avg CR = [(1 × (1/10)) + (3 × (1/20)) + (8 × (1/50))] / 12

= 0.0342
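The same calculation as a small Python helper (an illustrative sketch of the averaging done above):

```python
def average_compression_ratio(frame_counts, ratios):
    """Average of per-frame compressed fractions (1/ratio) over one GOP.
    frame_counts / ratios map frame type -> count and compression ratio."""
    total = sum(frame_counts.values())
    return sum(frame_counts[t] * (1 / ratios[t]) for t in frame_counts) / total

avg = average_compression_ratio({"I": 1, "P": 3, "B": 8},
                                {"I": 10, "P": 20, "B": 50})
print(round(avg, 4))  # 0.0342
```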

10. What is perceptual coding?

In perceptual coding, the limitations of human ear are exploited. We know that human ear can listen very small sound when there is complete silence. But if other big sounds are present, then human ear can not listen very small sounds. These characteristics of human ear are used in perceptual coding. The strong signal reduces level of sensitivity of the ear to other signals which are near to it in frequency. This effect is called frequency masking.

11. What is code excited LPC?

Code excited LPC uses a more sophisticated model of the vocal tract. Therefore the generated sound is more natural. This sophisticated model of the vocal tract is known as the code excited linear prediction (CELP) model.

12. Define pitch and period.

Pitch: The pitch of the signal gives information about the fundamental frequency. The pitch of every person is different. However, it lies in a similar range for males and in another range for females.

Period: This is the time duration of the signal. It is also one of the important features.

13. List the application of LPC.

Ø Since the generated sound is very synthetic, it is used mainly for military purposes.

Ø

LPC synthesis is used in applications which require very small bandwidth.

14. List the four international standards based on CELP.

They are ITU-T recommendations G.728, G.729, G.729(A) and G.723.1

15. What is meant by temporal masking?

When ear hears the loud sound, certain time has to be passed before it hears quieter sound. This is called temporal masking.

16. What is MPEG?

MPEG stands for Motion Pictures Expert Group(MPEG). It was formed by ISO. MPEG has developed the standards for compression of video with audio. MPEG audio coders are used for compression of audio. This compression mainly uses perceptual coding.

17. Draw the frame format in MPEG audio encoder.

In the MPEG audio encoder, the PCM audio signal passes through an analysis filter bank that splits it into subbands. A psychoacoustic model computes masking ratios and thresholds, which determine the bit allocations for each subband quantizer. The quantized subband samples, together with the bit allocations, are assembled by frame formatting into the encoded MPEG audio bitstream.

18. Write the advantages and applications of Dolby AC-1.

Ø Simple encoding scheme due to fixed bit allocations.

Ø Reduced compressed bit rate since frames do not include bit allocations. The typical compressed bit rate is 512 kbps for a two channel stereo signal.

Applications:

Ø It is used in satellites for FM radio.

Ø

It is also used for compression of sound associated with TV programs.

19. Write the features and drawbacks of Dolby AC-2.

Ø Bit allocations are not transmitted in the frame.

Ø Bit rate of the encoded audio is higher than MPEG audio coding.

Ø Complexity is more, since a psychoacoustic model and spectral envelope encoder/decoders are used.

Ø Subband samples are encoded and transmitted in the frame. Hence the bit rate of the compressed data is slightly reduced.

Ø It cannot be used for broadcast applications since encoder and decoder both contain the psychoacoustic model. Therefore the encoder cannot be modified easily.

20. Define I,P and B frames.

I-frame: Also known as an intracoded frame. It is normally the first frame of a new scene.

P-frame: Also known as a predictive frame. It basically predicts the movement of objects with respect to the preceding I-frame.

B-frame: Also known as a bidirectional frame. These frames relate the motion of objects in preceding as well as succeeding frames.

PART- B (16 Marks)

UNIT – I

1. (i) How will you calculate channel capacity? (2)

(ii) Write channel coding theorem and channel capacity theorem (5)

(iii) Calculate the entropy for the given sample data AAABBBCCD (3)

(iv) Prove Shannon information capacity theorem (6)

2. (i) Use differential entropy to compare the randomness of random variables (4)

(ii) A four symbol alphabet has the following probabilities

Pr(a0) = 1/2

Pr(a1) = 1/4

Pr(a2) = 1/8

Pr(a3) = 1/8

and an entropy of 1.75 bits. Find a codebook for this four letter alphabet that satisfies the source coding theorem (4)

(iii) Write the entropy for a binary symmetric source (4)

(iv) Write down the channel capacity for a binary channel (4)

3. A discrete memory less source has an alphabet of five symbols whose probabilities of occurrence are as described here

Symbols: X1 X2 X3 X4 X5

Probability: 0.2 0.2 0.1 0.1 0.4

Compute the Huffman code for this source. Also calculate the efficiency of the source encoder (8)

(b) A voice grade channel of telephone network has a bandwidth of 3.4 kHz

Calculate

(i) The information capacity of the telephone channel for a signal to noise ratio of 30 dB and

(ii) The minimum signal to noise ratio required to support information transmission through the telephone channel at the rate of 9.6 kb/s (8)

4. A discrete memory less source has an alphabet of seven symbols whose probabilities of occurrence are as described below

Symbol: s0 s1 s2 s3 s4 s5 s6

Prob : 0.25 0.25 0.0625 0.0625 0.125 0.125 0.125

(i) Compute the Huffman code for this source, moving combined symbols as high as possible (10)

(ii) Calculate the coding efficiency (4)

(iii) Why does the computed code have an efficiency of 100%? (2)

5. (i) Consider the following binary sequence: 111010011000101110100. Use the Lempel–Ziv algorithm to encode this sequence. Assume that the binary symbols 1 and 0 are already in the codebook (12)

(ii)What are the advantages of Lempel – Ziv encoding algorithm over Huffman coding? (4)

6. A discrete memory less source has an alphabet of five symbols with their probabilities for its output as given here

[X] = [x1 x2 x3 x4 x5 ]

P[X] = [0.45 0.15 0.15 0.10 0.15]

Compute two different Huffman codes for this source. For these two codes, find

(i) Average code word length

(ii) Variance of the average code word length over the ensemble of source symbols (16)

7. A discrete memoryless source X has five symbols x1, x2, x3, x4 and x5 with probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15 and p(x5) = 0.1

(i) Construct a Shannon–Fano code for X, and calculate the efficiency of the code (7)

(ii) Repeat for the Huffman code and compare the results (9)

8. Consider that two sources S1 and S2 emit messages x1, x2, x3 and y1, y2, y3 with joint probability P(X, Y) as shown in the matrix form:

P(X, Y) = [ 3/40  1/40  1/40 ]
          [ 1/20  3/20  1/20 ]
          [ 1/8   1/8   3/8  ]

Calculate the entropies H(X), H(Y), H(X/Y) and H(Y/X) (16)
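The required entropies can be checked numerically. The sketch below computes the marginals from the joint matrix and obtains the conditional entropies from the chain rule H(X/Y) = H(X,Y) - H(Y):

```python
from math import log2

P = [[3/40, 1/40, 1/40],
     [1/20, 3/20, 1/20],
     [1/8,  1/8,  3/8]]

px = [sum(row) for row in P]                              # marginal of X (rows)
py = [sum(P[i][j] for i in range(3)) for j in range(3)]   # marginal of Y (cols)

def h(dist):
    # Shannon entropy of a probability list, in bits
    return -sum(p * log2(p) for p in dist if p > 0)

HX, HY = h(px), h(py)
HXY = h([p for row in P for p in row])   # joint entropy H(X,Y)
H_X_given_Y = HXY - HY                   # chain rule
H_Y_given_X = HXY - HX
```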

9. Apply the Huffman coding procedure to the following message ensemble and determine the average length of the encoded message. Also determine the coding efficiency. Use coding alphabet D = 4. There are 10 symbols.

X = [x1, x2, x3……x10]

P[X] = [0.18, 0.17, 0.16, 0.15, 0.1, 0.08, 0.05, 0.05, 0.04, 0.2] (16)

UNIT II

1. (i) Compare and contrast DPCM and ADPCM (6)

(ii) Define pitch, period and loudness (6)

(iii) What is decibel? (2)

(iv) What is the purpose of DFT? (2)

2. (i)Explain delta modulation with examples (6)

(ii) Explain sub-band adaptive differential pulse code modulation (6)

(iii) What will happen if speech is coded at low bit rates? (4)

3. With a block diagram, explain the DPCM system. Compare DPCM with PCM and DM systems (16)

4. (i) Explain DM systems with a block diagram (8)

(ii) Consider a sine wave of frequency fm and amplitude Am, which is applied to a delta modulator of step size Δ. Show that slope overload distortion will occur if

Am > Δ / (2π fm Ts)

where Ts is the sampling period. What is the maximum power that may be transmitted without slope overload distortion? (8)
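A quick numeric check of the bound, with hypothetical values for fm, fs and the step size (none are given in the question): at Am = Δ/(2π fm Ts) the sinusoid's maximum slope 2π fm Am exactly equals the largest slope Δ/Ts the staircase can track, and the maximum average power without overload is Am²/2.

```python
from math import pi

fm = 1000.0                          # assumed sinusoid frequency (Hz)
fs = 64000.0                         # assumed sampling frequency (Hz)
Ts = 1 / fs                          # sampling period
delta = 0.1                          # assumed step size
Am = delta / (2 * pi * fm * Ts)      # largest amplitude without overload
max_signal_slope = 2 * pi * fm * Am  # max slope of Am*sin(2*pi*fm*t)
max_tracking_slope = delta / Ts      # fastest slope the modulator can follow
p_max = Am ** 2 / 2                  # max average power without overload
```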

5. Explain adaptive quantization and prediction with backward estimation in

ADPCM system with block diagram (16)

6. (i) Explain delta modulation systems with block diagrams (8)

(ii) What are slope-overload distortion and granular noise, and how are they overcome in adaptive delta modulation? (8)

7. What is delta modulation? Explain how the adaptive delta modulator works with different algorithms. Compare delta modulation with adaptive delta modulation (16)

8. Explain pulse code modulation and differential pulse code modulation (16)

UNIT III

1. Consider a Hamming code C which is determined by the parity check matrix

H = [ 1 1 0 1 1 0 0 ]
    [ 1 0 1 1 0 1 0 ]
    [ 0 1 1 1 0 0 1 ]

(i) Show that the two vectors C1 = (0010011) and C2 = (0001111) are code words of C, and calculate the Hamming distance between them (4)

(ii) Assume that a code word c was transmitted and that a vector r = c + e is received. Show that the syndrome S = r·H^T depends only on the error vector e (4)

(iii) Calculate the syndromes for all possible error vectors e with Hamming weight ≤ 1 and list them in a table. How can this be used to correct a single-bit error in an arbitrary position? (4)

(iv) What are the length n and the dimension k of the code? Why can the minimum Hamming distance dmin not be larger than three? (4)
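A sketch of the syndrome computations, assuming the (7,4) parity-check matrix in systematic form [P^T | I3], which is consistent with the stated codewords:

```python
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(v):
    # s = H . v^T over GF(2)
    return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

c1 = [0, 0, 1, 0, 0, 1, 1]
c2 = [0, 0, 0, 1, 1, 1, 1]
dist = sum(a != b for a, b in zip(c1, c2))   # Hamming distance between c1, c2

# syndrome table for all weight-1 errors: the columns of H are distinct and
# non-zero, so each single-bit error maps to a unique syndrome (the column at
# the error position), which both detects and locates the error
table = {syndrome([int(i == j) for j in range(7)]): i for i in range(7)}
```

Codewords give the all-zero syndrome, which is what parts (i) and (ii) exploit; the table of part (iii) simply inverts the column-to-position map.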

2. (i) Define linear block codes (2)

(ii) How is the parity check matrix found? (4)

(iii) Give the syndrome decoding algorithm (4)

(iv) Design a linear block code with dmin ≥ 3 for some block length n = 2^m - 1 (6)

3. a. Consider the generation of a (7,4) cyclic code by the generator polynomial g(x) = 1 + x + x^3. Calculate the code word for the message sequence 1001 and construct the systematic generator matrix G. (8)

b. Draw the diagram of the encoder and syndrome calculator generated by the polynomial g(x) (8)
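A sketch of the systematic encoding step for part (a): the parity bits are the remainder of x^(n-k) m(x) divided by g(x) over GF(2). Polynomials are written as coefficient lists, lowest degree first.

```python
def poly_mod(dividend, divisor):
    # remainder of GF(2) polynomial division; coefficients lowest degree first
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:                      # cancel the leading term with the divisor
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]       # remainder has degree < deg(g)

def cyclic_encode(msg, g):
    # systematic encoding: parity = x^(n-k) m(x) mod g(x), then append message
    shifted = [0] * (len(g) - 1) + msg
    return poly_mod(shifted, g) + msg

g = [1, 1, 0, 1]                      # g(x) = 1 + x + x^3
codeword = cyclic_encode([1, 0, 0, 1], g)
```

For the message 1001 this yields the codeword 0111001 (parity bits first), and dividing the codeword polynomial by g(x) leaves remainder zero, confirming it is a valid codeword.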

4. For the convolutional encoder shown below, encode the message sequence (10011). Also prepare the code tree for this encoder (16)

[Figure: convolutional encoder with two shift-register stages (FF) fed by the message bits; two paths through modulo-2 adders produce the output]

5. (i) Find a (7,4) cyclic code to encode the message sequence (10111) using the generator polynomial g(x) = 1 + x + x^3 (8)

(ii) Calculate the systematic generator matrix for the polynomial g(x) = 1 + x + x^3. Also draw the encoder diagram (8)

6. Verify whether g(x) = 1 + x + x^2 + x^3 + x^4 is a valid generator polynomial for generating a cyclic code for message  (16)

7. A convolutional encoder is defined by the following generator polynomials:

g0(x) = 1 + x + x^2 + x^3 + x^4

g1(x) = 1 + x + x^3 + x^4

g2(x) = 1 + x^2 + x^4

(i) What is the constraint length of this code? (4)

(ii) How many states are in the trellis diagram of this code? (8)

(iii) What is the code rate of this code? (4)

8. Construct a convolutional encoder for the following specifications: rate efficiency = 1/2, constraint length = 4. The connections from the shift register to the modulo-2 adders are described by the following equations:

g1(x) = 1 + x

g2(x) = x

Determine the output codeword for the input message  (16)
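A generic sketch of rate-1/n convolutional encoding with generators given as coefficient lists (lowest degree first). Since the question's input message is not shown, the example input below is hypothetical.

```python
def conv_encode(bits, gens):
    K = max(len(g) for g in gens)    # constraint length
    state = [0] * (K - 1)            # shift-register contents (past bits)
    out = []
    for b in bits:
        window = [b] + state         # current bit plus the past K-1 bits
        for g in gens:
            # each modulo-2 adder taps the register per its generator
            out.append(sum(c * x for c, x in zip(g, window)) % 2)
        state = window[:K - 1]       # shift the register
    return out

g1, g2 = [1, 1], [0, 1]              # g1(x) = 1 + x, g2(x) = x
encoded = conv_encode([1, 0, 1], [g1, g2])   # hypothetical input message
```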

UNIT IV

1. (i) Discuss the various stages in the JPEG standard (9)

(ii) Differentiate lossless and lossy compression techniques and give one example of each (4)

(iii)State the prefix property of Huffman code (3)

2. With the following symbols and probabilities of occurrence, encode the message "went#" using the arithmetic coding algorithm. Compare arithmetic coding with Huffman coding principles (16)

Symbols: e n t w #

Prob : 0.3 0.3 0.2 0.1 0.1
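A sketch of the interval-narrowing step of arithmetic coding for this alphabet, with symbol sub-intervals taken in the order listed. The result is the final interval; any number inside it identifies the message.

```python
def arith_interval(msg, probs):
    # build cumulative sub-intervals per symbol
    cum, c = {}, 0.0
    for s, p in probs.items():
        cum[s] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for s in msg:
        # narrow the current interval to the symbol's sub-interval
        width = high - low
        lo, hi = cum[s]
        low, high = low + width * lo, low + width * hi
    return low, high

probs = {'e': 0.3, 'n': 0.3, 't': 0.2, 'w': 0.1, '#': 0.1}
low, high = arith_interval("went#", probs)
```

Unlike Huffman coding, which assigns a whole number of bits per symbol, arithmetic coding narrows one interval for the whole message and so approaches the entropy bound even for skewed probabilities.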

3. (a)Draw the JPEG encoder schematic and explain (10)

(b) Assuming a quantization threshold value of 16, derive the resulting quantization error for each of the following DCT coefficients:

127, 72, 64, 56, -56, -64, -72, -128 (6)
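One way to derive the errors, assuming quantization by truncation toward zero (rounding to nearest is another common convention, and would give different figures):

```python
def quant_error(coef, step=16):
    # dequantized value after truncating quantization; int() truncates toward zero
    dequantized = int(coef / step) * step
    return coef - dequantized

coeffs = [127, 72, 64, 56, -56, -64, -72, -128]
errors = [quant_error(c) for c in coeffs]
```

Coefficients that are exact multiples of the threshold (64, -64, -128) incur no error; the others lose up to step - 1 in magnitude.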

4. (i)Explain arithmetic coding with suitable example (12)

(ii) Compare the arithmetic coding algorithm with Huffman coding (4)

5. (i)Draw JPEG encoder block diagram and explain each block (14)

(ii) Why are DC and AC coefficients encoded separately in JPEG? (2)

6. (a) Discuss in brief the principles of compression (12)

(b) In the context of compression for text, image, audio and video, which of the compression techniques discussed above are suitable, and why? (4)

7. (i) Investigate the 'block preparation' and 'quantization' phases of the JPEG compression process, with diagrams wherever necessary (8)

(ii) Elucidate the GIF and TIFF image compression formats (8)

UNIT V

1. (i)Explain the principles of perceptual coding (14)

(ii) Why is LPC not suitable for encoding music signals? (2)

2. (i) Explain the encoding procedure of I, P and B frames in video encoding with suitable diagrams (14)

(ii) What are the special features of the MPEG-4 standard? (2)

3. Explain the Linear Predictive Coding (LPC) model of analysis and synthesis of speech signals. State the advantages of coding speech signals at low bit rates (16)

4. Explain the encoding procedure of I, P and B frames in video compression techniques. State the intended applications of the following video coding standards: MPEG-1, MPEG-2, MPEG-3, MPEG-4 (16)

5. (i)What are macro blocks and GOBs? (4)

(ii) On what factors does the quantization threshold depend in the H.261 standard? (3)

(iii)Discuss the MPEG compression techniques (9)

6. (i) Discuss the various Dolby audio coders (8)

(ii) Discuss about any two audio coding techniques used in MPEG (8)

7. Discuss in brief the following audio coders:

MPEG audio coders (8)

Dolby audio coders (8)

8. (i) Explain the 'motion estimation' and 'motion compensation' phases of the P- and B-frame encoding process, with diagrams wherever necessary (12)

(ii) Write a short note on the 'macroblock' format of the H.261 compression standard (4)

*************************
