The first digital voltmeter was invented and produced by Andrew Kay of Non-Linear Systems (he later founded Kaypro) in 1954. Digital voltmeters usually employ an electronic circuit that acts as an integrator, ramping its output voltage linearly while the input voltage is held constant (this can be easily realized with an op-amp).
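As a numeric sketch of that integrator behavior: for an ideal op-amp integrator with input resistor R and feedback capacitor C, a constant input makes the output ramp at a rate of -V_in/(RC). The component values and time step below are illustrative assumptions, not values from the text.

```python
# Minimal numeric sketch of an ideal op-amp integrator:
# with a constant input V_in, the output ramps linearly at -V_in / (R * C).

R = 10e3    # input resistor, ohms (illustrative value)
C = 1e-6    # feedback capacitor, farads (illustrative value)
v_in = 1.0  # constant input voltage
dt = 1e-4   # simulation time step, seconds

v_out = 0.0
samples = []
for _ in range(100):                 # simulate 10 ms total
    v_out += -v_in / (R * C) * dt    # Euler step of dV/dt = -V_in/(RC)
    samples.append(v_out)

# After 10 ms the output has ramped to -V_in * t / (R * C) = -1.0 V.
print(samples[-1])
```

The linearity of this ramp is what the dual-slope method described next relies on: equal time at equal voltage moves the integrator output by equal amounts.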
The dual-slope integrator method applies a known reference voltage to the integrator for a fixed time to ramp the integrator's output voltage up; the unknown voltage is then applied to ramp it back down, and the time taken to return the output to zero is measured (this is how the method is realized in an ADC). Because the integrator gains and loses equal charge over the two phases, the reference voltage times the ramp-up time equals the unknown voltage times the ramp-down time, so the unknown voltage is the product of the voltage reference and the fixed ramp-up time divided by the measured ramp-down time.
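The conversion arithmetic above can be sketched as a small simulation. This follows the order the text describes (reference ramps up for a fixed time, unknown ramps down); the function name, voltages, and clock-tick resolution are hypothetical, and the RC factor is omitted because it cancels out of the ratio.

```python
# Sketch of dual-slope conversion: V_ref charges the integrator for a fixed
# run-up time t_up; the unknown input then discharges it while clock ticks
# are counted. Equal charge up and down gives V_in = V_ref * t_up / t_down.

def dual_slope_measure(v_in, v_ref, t_up, tick):
    """Simulate charge/discharge and return the estimated input voltage."""
    # Integrator charge after the fixed run-up period (RC cancels, omitted).
    charge = v_ref * t_up
    # Count discrete clock ticks until the unknown input removes the charge.
    ticks = 0
    while charge > 0:
        charge -= v_in * tick
        ticks += 1
    t_down = ticks * tick
    return v_ref * t_up / t_down

# Hypothetical values: 2.5 V reference, 100 ms run-up, 1 us clock tick.
print(dual_slope_measure(v_in=1.234, v_ref=2.5, t_up=0.1, tick=1e-6))
```

Note that quantizing the ramp-down time to clock ticks limits resolution: a faster clock or a longer run-up period yields more ticks and hence more digits, which is one reason dual-slope converters trade speed for accuracy.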
The voltage reference must remain constant during the ramp-up time, which may be difficult to achieve given supply-voltage and temperature variations. Part of the problem of making an accurate voltmeter is calibration to check its accuracy. In laboratories, the Weston cell is used as a standard voltage for precision work, and precision voltage references based on electronic circuits are also available. Digital voltmeters necessarily have input amplifiers and, like vacuum tube voltmeters, generally present a constant input resistance of 10 megohms regardless of the set measurement range.
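That 10-megohm input resistance matters when measuring high-impedance sources: the source's output resistance and the meter's input resistance form a voltage divider, so the meter reads slightly low. The numbers below are illustrative assumptions.

```python
# Loading-error sketch: a meter with a 10 Mohm input resistance reading a
# source with nonzero output resistance forms a voltage divider, so the
# indicated voltage is lower than the true open-circuit voltage.

R_METER = 10e6  # meter input resistance, ohms

def indicated_voltage(v_source, r_source):
    """Voltage the meter displays, given source EMF and output resistance."""
    return v_source * R_METER / (r_source + R_METER)

# Hypothetical example: a 5 V source with 100 kohm output resistance
# reads about 4.95 V, a 1% loading error.
print(indicated_voltage(5.0, 100e3))
```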
Thursday, March 12, 2009