How are the signed floating point values read from the ADC stored in 14 bits?
Posted: Sun Aug 16, 2020 7:41 am
Hello all.
I am writing a custom app and would like to do processing on the signals from the ADC, as well as configure the signals the DAC generates. I understand that the ADC and DAC have 14-bit resolution, but I do not understand how to work with this data. The IEEE standards I have seen for representing signed floating-point values in binary define formats for 32- and 64-bit data, but I am unsure how to apply that to 14-bit data. Any help is appreciated.