The procedure in the datasheet is straightforward. However, many people are confused by the graph shown in the datasheet:
[Image: calibration graph from the datasheet]
Note: I used only a DC signal for the whole calibration process. (No AC signal was used during calibration.)
Here are the steps, with an example, for the calibration (gain calibration only; I didn't include the offset calibration, as it is easy to follow from the datasheet). A rough code sketch of this flow follows the list:
1. Inject a known DC voltage signal. For example, I injected a well-calibrated voltage of 163.22 mV. (You can use any voltage here depending on your preference, but this is the voltage I used in my computations.)
2. Perform the AC gain calibration.
3. After the AC gain calibration (if done correctly), 163.22 mV corresponds to 0.600 (instantaneous register value).
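Since the exact commands and register names depend on which metering IC you use, the sketch below only outlines the flow in C with placeholder driver functions; `meter_start_gain_calibration`, `meter_calibration_busy`, and `meter_read_instantaneous` are hypothetical names, not a real part's API. The 163.22 mV reference and the 0.600 expected reading are just the numbers from the example above.

```c
#include <stdio.h>
#include <math.h>

/* Placeholder driver calls -- replace with your metering IC's actual
 * interface (typically SPI register reads/writes). These names are
 * hypothetical and do not correspond to any specific part. */
extern void   meter_start_gain_calibration(void);   /* issue the gain-cal command      */
extern int    meter_calibration_busy(void);         /* poll the "cal in progress" flag */
extern double meter_read_instantaneous(void);       /* read the instantaneous register */

/* Reference DC level applied to the channel (step 1 above). */
#define CAL_VOLTAGE_MV   163.22
/* Reading expected after calibration in this example (step 3 above);
 * your target depends on the device and gain setting. */
#define EXPECTED_READING 0.600

int run_gain_calibration(void)
{
    /* Step 2: with the 163.22 mV DC reference applied, run the chip's
     * built-in gain calibration and wait for it to finish. */
    meter_start_gain_calibration();
    while (meter_calibration_busy())
        ;

    /* Step 3: verify that the calibrated reading lands on the expected value. */
    double reading = meter_read_instantaneous();
    printf("post-cal reading: %.3f (expected %.3f)\n", reading, EXPECTED_READING);
    return fabs(reading - EXPECTED_READING) < 0.005 ? 0 : -1;
}
```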
That's it. In actual operation (after calibration), if I get a value of 0.510, I just need a simple ratio and proportion to get the corresponding voltage:
163.22 mV / 0.600 = x / 0.510
x = 138.737 mV, the voltage currently present at the channel being measured.
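To make the conversion concrete, here is a minimal C sketch of that ratio-and-proportion step. The 163.22 mV / 0.600 calibration point and the 0.510 reading are simply the example numbers above; the constant and function names are mine, not part of any device driver.

```c
#include <stdio.h>

/* Calibration point from the example above: a known 163.22 mV DC input
 * reads back as 0.600 in the instantaneous register after gain calibration. */
#define CAL_VOLTAGE_MV  163.22
#define CAL_REG_VALUE   0.600

/* Convert a raw instantaneous register reading to millivolts by
 * simple ratio and proportion against the calibration point. */
static double reg_to_millivolts(double reg_value)
{
    return (CAL_VOLTAGE_MV / CAL_REG_VALUE) * reg_value;
}

int main(void)
{
    /* 0.510 is the example reading taken during normal operation. */
    printf("%.3f mV\n", reg_to_millivolts(0.510)); /* prints 138.737 mV */
    return 0;
}
```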
With this method, I was able to compare the readings from my project against a calibrated Chroma Power Meter. (There are small differences in the current readings due to the wrong type of current transducer I used in the initial phase of this project.)
[Image: comparison of readings against the Chroma Power Meter]
With the data I got, I conclude that the method is accurate and precise. :)