### LIC–AD and DA Converters (Unit 4)–2 Marks

Anna University

147404 - LINEAR INTEGRATED CIRCUITS

UNIT-IV

ANALOG TO DIGITAL AND DIGITAL TO ANALOG CONVERTERS

1. Explain the operation of basic sample and hold circuit.

A typical sample and hold circuit stores electric charge on a capacitor and contains at least one fast FET switch and at least one operational amplifier. To sample the input signal, the switch connects the capacitor to the output of a buffer amplifier. The buffer amplifier charges or discharges the capacitor so that the voltage across it is practically equal, or proportional, to the input voltage. In hold mode, the switch disconnects the capacitor from the buffer. The capacitor is inevitably discharged by its own leakage current and by the load current, which makes the circuit inherently volatile, but the loss of voltage (voltage droop) within a specified hold time is kept within an acceptable error margin.
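The droop during the hold interval can be estimated from the RC discharge of the hold capacitor. A minimal Python sketch, assuming hypothetical component values (1 nF hold capacitor, 1 GΩ effective leakage resistance):

```python
# Sketch of hold-mode droop in a sample-and-hold circuit.
# Component values below are illustrative assumptions, not from the text.
import math

C = 1e-9        # hold capacitance in farads (1 nF)
R_leak = 1e9    # effective leakage resistance in ohms (1 G-ohm)
V0 = 5.0        # sampled voltage in volts
t_hold = 1e-3   # hold time in seconds (1 ms)

# The capacitor discharges through its leakage: V(t) = V0 * exp(-t / (R*C))
V_end = V0 * math.exp(-t_hold / (R_leak * C))
droop = V0 - V_end
print(f"droop after {t_hold*1e3:.1f} ms: {droop*1e3:.3f} mV")  # about 5 mV
```

With a 1 s time constant, a 1 ms hold loses only about 0.1 % of the sampled value, which is the sense in which the droop stays "within an acceptable error margin."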

2. State the advantages and applications of sample and hold circuits.

A sample and hold circuit is one which samples an input signal and holds on to its last sampled value until the input is sampled again. This circuit is mainly used in digital interfacing, analog to digital systems, and pulse code modulation systems.

3. List the drawbacks of binary weighted resistor technique of D/A conversion.

a) Wide range of resistor values needed

b) Difficulty in achieving and maintaining accurate ratios over a wide range of variations

4. Which type of ADC is the fastest? What is its disadvantage?

Flash (parallel comparator) type ADC is the fastest as well as the most expensive.

The disadvantage is that the number of comparators nearly doubles for each added bit: an n-bit flash converter requires (2ⁿ − 1) comparators and 2ⁿ resistors.
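The growth in hardware can be tabulated with a small Python sketch of the 2ⁿ − 1 comparator rule:

```python
# Comparator and resistor counts for an n-bit flash ADC:
# (2**n - 1) comparators and 2**n resistors are required.
for n in range(4, 11, 2):
    comparators = 2**n - 1
    resistors = 2**n
    print(f"{n}-bit flash ADC: {comparators} comparators, {resistors} resistors")
```

Each extra bit roughly doubles the comparator count, which is why flash converters are rarely built beyond 8–10 bits.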

5. The basic step of a 9-bit DAC is 10.3 mV. If 000000000 represents 0 V, what is the output for an input of 101101111?

The output voltage for input of 101101111 is

= 10.3 mV × (1×2⁸ + 0×2⁷ + 1×2⁶ + 1×2⁵ + 0×2⁴ + 1×2³ + 1×2² + 1×2¹ + 1×2⁰)

= 10.3 × 10⁻³ × 367 = 3.78 V
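The arithmetic above can be checked with a short Python sketch (the step size and binary input are from the worked example):

```python
# Verify the worked example: 9-bit DAC, 10.3 mV per step, input 101101111.
step = 10.3e-3                 # 1 LSB in volts
code = int("101101111", 2)     # binary input as an integer (= 367)
v_out = step * code
print(code, round(v_out, 2))   # 367 3.78
```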

6. Why does the dual slope ADC provide excellent noise rejection of AC signals whose periods are integral submultiples of the integration time?

The dual slope ADC integrates (averages) the input over a fixed integration time. Any AC noise component whose period divides the integration time exactly completes a whole number of cycles within that window, so its positive and negative half-cycles cancel and it contributes nothing to the integrator output.
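A numeric Python sketch illustrates the averaging argument: a sinusoid averaged over an integral number of its periods cancels to (nearly) zero, while a non-integral number of periods does not. The 20 ms window and the noise frequencies are illustrative assumptions:

```python
# Average of sinusoidal noise over a dual-slope integration window.
import math

def average_over(t_int, f_noise, steps=100000):
    """Mean of sin(2*pi*f*t) over the window [0, t_int], midpoint rule."""
    dt = t_int / steps
    return sum(math.sin(2 * math.pi * f_noise * (i + 0.5) * dt)
               for i in range(steps)) * dt / t_int

t_int = 1 / 50  # integration time = 20 ms (one 50 Hz period)
print(abs(average_over(t_int, 50)))    # whole number of periods: ~0
print(abs(average_over(t_int, 100)))   # whole number of periods: ~0
print(abs(average_over(t_int, 37)))    # fractional periods: does not cancel
```

Choosing the integration time as one period of the mains frequency (20 ms for 50 Hz) is why dual slope converters reject line-frequency hum and all its harmonics.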

7. Find the resolution of a 12-bit DAC converter.

Resolution (volts) = VFS/(2¹² − 1) = VFS/4095 = 1 LSB increment

VFS – full-scale output voltage
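The resolution formula can be evaluated in Python; the 10 V full-scale value is an assumption for illustration only:

```python
# Resolution of a 12-bit DAC, assuming a hypothetical 10 V full scale.
V_FS = 10.0                       # full-scale voltage (assumed)
n = 12                            # number of bits
resolution = V_FS / (2**n - 1)    # 1 LSB increment in volts
print(f"{resolution*1e3:.3f} mV per LSB")   # about 2.442 mV
```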

8. State the advantages and disadvantages of the R-2R ladder DAC.

Advantages:

a) Easier to build accurately, as only two precision resistor values (R and 2R) are required.

b) The number of bits can be expanded by adding more sections of the same R/2R values.

Disadvantage:

a) When the digital input changes, the current through the resistors changes; the resulting variation in power dissipation introduces non-linearity in the DAC.

9. Define start of conversion and end of conversion.

Start of Conversion in ADC (SOC): This is the control signal for start of conversion which initiates A/D conversion process.

End of Conversion in ADC (EOC): This is the control signal which is activated when the conversion is completed.

10. What are the types of ADC and DAC?

Types of ADC:

1. Flash (comparator) type converter

2. Counter type converter

3. Tracking or servo converter

4. Successive approximation type converter

Types of DAC:

1. Weighted resistor DAC

2. R-2R ladder DAC

11. What is the difference between direct ADC and integrating type ADC?

a) Integrating type ADCs do not need a sample-and-hold circuit at the input.

b) Since the input is converted to a frequency or time period, the signal can be transmitted even in a noisy environment or in an isolated form.

12. Define following performance parameters of D/A converters: (8)

a) Resolution

The resolution of a converter is the smallest change in voltage which may be produced at the output or input of the converter.

Resolution (in volts) = VFS/(2ⁿ − 1) = 1 LSB increment. The resolution of an ADC is defined as the smallest change in analog input required for a one-bit change at the output.

b) Accuracy

Absolute accuracy:

It is the maximum deviation between the actual converter output & the ideal converter output.

Relative accuracy:

It is the maximum deviation after gain & offset errors have been removed.

The accuracy of a converter is also specified in form of LSB increments or % of full scale voltage.

c) Monotonicity

A monotonic DAC is one whose analog output increases, or at least never decreases, as the digital input increases.

d) Conversion time

It is defined as the total time required to convert an analog signal into its digital output. It depends on the conversion technique used & the propagation delay of circuit components.

The conversion time of a successive approximation type ADC is given by

Tc = T(n + 1)

where T – clock period, Tc – conversion time, n – number of bits.
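The Tc = T(n + 1) relation can be evaluated in Python; the 8-bit, 1 MHz figures below are illustrative assumptions:

```python
# Conversion time of a successive approximation ADC: Tc = T * (n + 1).
# Hypothetical example: 8-bit converter clocked at 1 MHz.
f_clk = 1e6          # clock frequency in Hz (assumed)
T = 1 / f_clk        # clock period in seconds
n = 8                # number of bits (assumed)
Tc = T * (n + 1)
print(f"conversion time: {Tc*1e6:.1f} us")   # 9.0 us
```

Note that, unlike a flash converter, the conversion time grows only linearly with the number of bits, which is the key trade-off of the successive approximation technique.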