
What is the magic constant 1.079?

Posted: Tue Oct 14, 2014 10:36 am
by codinghead
Hi All,

On the wiki, here:

http://wiki.redpitaya.com/index.php?tit ... de#Example

there is an example on how to modify the acquire application.

Could someone explain where the constant 1.079 that is used for the calibration of the front end results comes from?

Code:

float cal_fe = 1.079/8191;
This gives a weighting of 131.73uV per bit, whereas I was working with 122uV per bit. Conversions using 131.73uV do produce the correct voltage, but I don't understand where this value comes from.

Thanks in advance, Stuart

Re: What is the magic constant 1.079?

Posted: Tue Oct 28, 2014 12:06 am
by sa-penguin
As a quick check, 2^14 (14-bit ADC) * 131.73uV = 2.158V.
The input is rated at +/- 1V.

So I'm assuming the input "protection" kicks in before the ADC reaches its full range.
I have to guess, because this is the kind of thing you'd verify against the schematic, which I haven't found yet.

Anyway... since full scale voltage is slightly higher, the step size is also slightly higher.
Hence the correction factor.

Now you've got me wondering what the ADC chip is...

Re: What is the magic constant 1.079?

Posted: Tue Oct 28, 2014 12:33 am
by Nils Roos