1. A method for determining a state of charge of a vehicle battery comprising:
providing a computer-based system comprising an input, an output, a processor, memory and program code comprising at least a confidence evaluator;
operating the program code such that calculations made thereby are performed by said processor;
receiving into said system sensor data indicative of a voltage, current, and temperature of the battery;
determining with said system a first state of charge value using a voltage-based strategy on the sensor data;
calculating with said system a first confidence value for the first state of charge value;
determining with said system a second state of charge value using a current-based strategy on the sensor data;
calculating with said system a second confidence value for the second state of charge value;
comparing the first confidence value and second confidence value with the confidence evaluator;
selecting between the first state of charge value and the second state of charge value based on the comparison; and
storing the selected state of charge value in said system as an overall state of charge value.
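As a non-limiting illustration, the method of claim 1 can be sketched in Python as follows; the estimation formulas and numeric thresholds are placeholders, not the claimed strategies:

```python
# Sketch of the claim 1 method: two state-of-charge estimates, each with
# a confidence value; the estimate with the higher confidence is kept.
# All function bodies below are illustrative placeholders.

def voltage_based_soc(voltage, temperature):
    # Placeholder voltage-based strategy: linear map from open-circuit
    # voltage to SoC (endpoints invented for illustration).
    return max(0.0, min(1.0, (voltage - 11.8) / (12.7 - 11.8)))

def current_based_soc(prev_soc, current, dt_hours, capacity_ah):
    # Placeholder current-based strategy: coulomb counting
    # (current accumulation over the sampling interval).
    return prev_soc - current * dt_hours / capacity_ah

def select_overall_soc(soc_v, conf_v, soc_i, conf_i):
    # Confidence evaluator: keep the estimate with the higher confidence.
    return soc_v if conf_v >= conf_i else soc_i
```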
2. The method of claim 1, wherein the first confidence value is calculated using the tolerance of a voltage sensor, the rest time of the battery, and the diffusion constant of the battery.
3. The method of claim 1, wherein the second confidence value is calculated using the tolerance of a current sensor.
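Claims 2 and 3 name the inputs to the two confidence values but not their functional form. One hypothetical sketch, assuming the voltage-based confidence rises as the battery rests (exponential relaxation governed by the diffusion constant) and falls with sensor tolerance:

```python
import math

def voltage_confidence(sensor_tolerance_v, rest_time_s, diffusion_const):
    # Assumed form, not from the claims: confidence approaches 1 as the
    # rested battery settles toward open-circuit voltage, scaled down by
    # the voltage-sensor tolerance.
    relaxation = 1.0 - math.exp(-diffusion_const * rest_time_s)
    return relaxation / (1.0 + sensor_tolerance_v)

def current_confidence(sensor_tolerance_a):
    # Assumed form for claim 3: larger current-sensor tolerance,
    # lower confidence.
    return 1.0 / (1.0 + sensor_tolerance_a)
```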
4. The method of claim 1, further comprising providing the overall state of charge value to a display device.
5. The method of claim 1, wherein the voltage-based strategy uses linear regression to determine an open circuit voltage value.
6. The method of claim 1, further comprising calculating an asymmetric confidence range for the first state of charge value based in part on the first confidence value.
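The asymmetric confidence range of claim 6 might look like the following sketch; the scale factors are invented for illustration only:

```python
def asymmetric_range(soc, confidence, up_scale=0.5, down_scale=1.0):
    # Hypothetical sketch for claim 6: the range need not be symmetric
    # about the estimate; here the downward uncertainty is larger than
    # the upward one (both scales are illustrative assumptions).
    spread = 1.0 - confidence
    lower = max(0.0, soc - down_scale * spread)
    upper = min(1.0, soc + up_scale * spread)
    return lower, upper
```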
7. A system for determining a state of charge of a vehicle battery comprising:
an interface configured to receive sensor data from a voltage sensor, a current sensor, and a temperature sensor connected to the battery;
a voltage-based state of charge generator configured to generate a first state of charge value using a voltage-based strategy on the sensor data;
a voltage-based confidence value generator configured to calculate a first confidence value for the first state of charge value;
a current-based state of charge generator configured to generate a second state of charge value using a current-accumulation strategy on the sensor data;
a current-based confidence value generator configured to calculate a second confidence value for the second state of charge value;
a confidence value evaluator configured to compare the first confidence value and second confidence value; and
a state of charge storage configured to store the first or the second state of charge value as an overall state of charge value, based on the comparison.
8. The system of claim 7, further comprising:
a battery rest timer configured to determine a rest time of the battery; and
wherein the voltage-based confidence value generator is configured to calculate the first confidence value using a tolerance of the voltage sensor, the rest time of the battery, and a diffusion constant of the battery.
9. The system of claim 8, wherein the current-based confidence value generator is configured to calculate the second confidence value using a tolerance of the current sensor.
10. The system of claim 7, further comprising an interface configured to provide the overall state of charge value to a display device.
11. The system of claim 7, wherein the voltage-based state of charge generator uses linear regression to determine the first state of charge value.
12. The system of claim 7, wherein the voltage-based state of charge generator uses a direct voltage measurement to determine the first state of charge value.
13. The system of claim 7, wherein the voltage-based state of charge generator uses both linear regression and a direct voltage measurement to determine the first state of charge value.
14. A system for determining a state of charge of a vehicle battery comprising:
a vehicle battery;
a temperature sensor configured to measure a temperature of the battery;
a current sensor configured to measure a current of the battery;
a voltage sensor configured to measure a voltage of the battery;
a memory storing one or more state of charge values for the battery; and
a processor coupled to the memory and configured to:
receive sensor data from the sensors indicative of a voltage, current, and temperature of the battery;
determine a first state of charge value using a voltage-based strategy on the sensor data;
calculate a first confidence value for the first state of charge value;
determine a second state of charge value using a current-based strategy on the sensor data;
calculate a second confidence value for the second state of charge value;
compare the first confidence value and second confidence value;
select between the first state of charge value and the second state of charge value based on the comparison; and
store the selected state of charge value in the memory as an overall state of charge value.
15. The system of claim 14, wherein the memory further stores a tolerance of the voltage sensor, a rest time of the battery, and a diffusion constant of the battery; and wherein the processor is further configured to calculate the first confidence value using the tolerance of the voltage sensor, the rest time of the battery, and the diffusion constant of the battery.
16. The system of claim 14, wherein the memory further stores a tolerance of the current sensor; and wherein the processor is further configured to calculate the second confidence value using the tolerance of the current sensor.
17. The system of claim 14, wherein the processor is further configured to provide the overall state of charge value to a display device.
18. The system of claim 14, wherein the voltage-based strategy uses linear regression to determine the first state of charge value.
19. The system of claim 14, wherein the processor is further configured to determine an asymmetric confidence range for the first and the second state of charge value, and configured to use the asymmetric confidence range to select between the first state of charge value and the second state of charge value.
20. The system of claim 14, wherein the battery is a lithium-iron-phosphate battery.
The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.
1. An image display system, comprising
a plurality of pixels, each relating to a mura compensation coefficient set;
a memory, storing the mura compensation coefficient sets of the pixels; and
an ASIC, retrieving the mura compensation coefficient sets from the memory and forming different mura compensation function sets with different mura compensation coefficient sets, wherein each mura compensation function set is used for transforming an original gray level of the corresponding pixel to a mura-compensated gray level that is used in driving the pixel;
wherein the mura compensation coefficient sets are generated by a coefficient generator comprising:
a plurality of sensing units, sensing the pixels and outputting sensed data;
an average luminance measuring instrument, measuring average luminance of all of the pixels; and
a processing unit, transforming the sensed data to luminance data based on the average luminance, wherein the luminance datum and the sensed datum follow the following equation:
L = L_AVG·(G/G_AVG)^r,
where:
L represents the luminance datum,
L_AVG represents the average luminance of all of the pixels,
G represents the sensed datum,
G_AVG represents the average value of the sensed data of all pixels, and
r represents an adjusting factor, dependent on a sensed data—actual luminance linearity;
wherein the processing unit provides at least one test gray level to test the pixels and generate the mura compensation coefficient set of each pixel according to the relationship between the test gray level and the corresponding luminance datum.
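The claimed sensed-data-to-luminance transformation can be written directly as a short function:

```python
def sensed_to_luminance(g, g_avg, l_avg, r):
    # Claimed relation: L = L_AVG * (G / G_AVG) ** r, where r adjusts for
    # the linearity between sensed data and actual luminance.
    return l_avg * (g / g_avg) ** r
```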
2. The system as claimed in claim 1, wherein the processing unit tests the pixels by more than one test gray level and collects the corresponding luminance data to generate a gray level—luminance datum relationship model for each pixel.
3. The system as claimed in claim 2, wherein each mura compensation function set comprises an original gray level—expected luminance transformation,
x_e = L_peak·(y_o/255)^γ,
where y_o represents the original gray level, L_peak represents a peak luminance, γ represents a gamma coefficient, and x_e represents an expected luminance corresponding to y_o while L_peak and γ are satisfied.
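The original gray level to expected luminance transformation of claim 3, expressed as a function:

```python
def expected_luminance(y_o, l_peak, gamma):
    # Claimed transformation: x_e = L_peak * (y_o / 255) ** gamma.
    return l_peak * (y_o / 255.0) ** gamma
```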
4. The system as claimed in claim 3, wherein each gray level—luminance datum relationship model is described by the following equation:
y = a·x^n + b·x^2 + c·x + d,
where
y represents the gray level actually driving the pixel,
x represents the luminance datum corresponding to y,
n represents an exponential factor, dependent on γ, and
a, b, c, d and n form the mura compensation coefficient set of the pixel.
5. The system as claimed in claim 4, wherein each mura compensation function set further comprises an expected luminance—mura-compensated gray level transformation, y_c = a·x_e^n + b·x_e^2 + c·x_e + d, where y_c represents the mura-compensated gray level corresponding to x_e.
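Because claim 4 fixes the exponential factor n in advance (n depends on γ), the remaining coefficients a, b, c, d follow from ordinary linear least squares. A sketch of fitting the claim 4 model and applying the claim 5 transformation; the use of numpy and the test value n = 0.5 are illustrative choices, not from the claims:

```python
import numpy as np

def fit_coefficients(x, y, n):
    # Claim 4 model: y = a*x**n + b*x**2 + c*x + d. With n fixed, the
    # model is linear in a, b, c, d, so least squares recovers them.
    x = np.asarray(x, dtype=float)
    A = np.column_stack([x**n, x**2, x, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coeffs  # a, b, c, d

def compensated_gray(x_e, a, b, c, d, n):
    # Claim 5 transformation: y_c = a*x_e**n + b*x_e**2 + c*x_e + d.
    return a * x_e**n + b * x_e**2 + c * x_e + d
```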
6. The system as claimed in claim 3, wherein each gray level—luminance datum relationship model is described by the following equation:
y = a·x^n + b·x + c,
where
y represents the gray level actually driving the pixel,
x represents the luminance datum corresponding to y,
n represents an exponential factor, dependent on the illumination of the panel area in which the pixel is located, and
a, b, c and n form the mura compensation coefficient set of the pixel.
7. The system as claimed in claim 6, wherein each mura compensation function set further comprises an expected luminance—mura-compensated gray level transformation, y_c = a·x_e^n + b·x_e + c, where y_c represents the mura-compensated gray level corresponding to x_e.
8. The system as claimed in claim 1, further comprising a display panel, comprising the pixels, the memory and the ASIC.
9. The system as claimed in claim 8, further comprising an electronic device, comprising:
the display panel; and
an input unit, coupled to the display panel to receive images to be displayed by the display panel.
10. The system as claimed in claim 9, wherein the electronic device is a cell phone, a digital camera, a personal digital assistant, a notebook, a desktop, a television, a car display panel, or a portable DVD player.
11. A method for compensating mura defect, comprising:
providing a plurality of sensing units for a plurality of pixels of a pixel array to generate sensed data of the pixels;
providing an average luminance measuring instrument to measure an average luminance of all of the pixels;
providing a processing unit, transforming the sensed data to luminance data based on the average luminance, wherein the luminance datum and the sensed datum follow the following equation:
L = L_AVG·(G/G_AVG)^r,
where:
L represents the luminance datum,
L_AVG represents the average luminance of all of the pixels,
G represents the sensed datum,
G_AVG represents the average value of the sensed data of all pixels, and
r represents an adjusting factor, dependent on a sensed data—actual luminance linearity;
providing at least one test gray level to test the pixels and generate a mura compensation coefficient set for each pixel according to the relationship between the test gray level and the corresponding luminance datum;
storing the mura compensation coefficient sets into a memory;
providing an ASIC to retrieve the mura compensation coefficient sets from the memory and form different mura compensation function sets with different mura compensation coefficient sets, wherein each mura compensation function set is used for transforming an original gray level of the corresponding pixel to a mura-compensated gray level; and
driving the pixels by the mura-compensated gray levels.
12. The method as claimed in claim 11, further comprising testing the pixels by more than one test gray level and collecting the corresponding luminance data to generate a gray level—luminance datum relationship model for each pixel.
13. The method as claimed in claim 12, wherein each gray level—luminance datum relationship model is described by the following equation:
y = a·x^n + b·x^2 + c·x + d,
where
y represents the gray level actually driving the pixel,
x represents the luminance datum corresponding to y,
n represents an exponential factor, dependent on a gamma factor, and
a, b, c, d and n form the mura compensation coefficient set of the pixel.
14. The method as claimed in claim 13, wherein each mura compensation function set comprises an original gray level—expected luminance transformation,
x_e = L_peak·(y_o/255)^γ,
where y_o represents the original gray level, L_peak represents a peak luminance, γ represents the gamma coefficient, and x_e represents an expected luminance corresponding to y_o while L_peak and γ are satisfied.
15. The method as claimed in claim 14, wherein each mura compensation function set further comprises an expected luminance—mura-compensated gray level transformation, y_c = a·x_e^n + b·x_e^2 + c·x_e + d, where y_c represents the mura-compensated gray level corresponding to x_e.
16. The method as claimed in claim 12, wherein each gray level—luminance datum relationship model is described by the following equation:
y = a·x^n + b·x + c,
where
y represents the gray level actually driving the pixel,
x represents the luminance datum corresponding to y,
n represents an exponential factor, dependent on the illumination of the panel area in which the pixel is located, and
a, b, c and n form the mura compensation coefficient set of the pixel.
17. The method as claimed in claim 16, wherein the exponential factor is set by:
dividing the pixel array into a plurality of regions according to the luminance of the pixels;
sampling pixels in each region and estimating the exponential factors of the sampled pixels;
averaging the estimated exponential factors in each region to get an average exponential factor of each region; and
assigning the average exponential factor to all pixels in the corresponding region as the exponential factors of the pixels.
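The region-averaging steps of claim 17 reduce to a short sketch; the per-pixel region labels and the sampled factor estimates are hypothetical inputs, and their data layout is an assumption:

```python
def assign_region_factors(pixel_regions, sampled_factors):
    # Claim 17 sketch: average the exponential factors estimated from the
    # sampled pixels of each region, then assign that average to every
    # pixel of the region.
    # pixel_regions: region id per pixel.
    # sampled_factors: region id -> list of factor estimates.
    region_avg = {rid: sum(f) / len(f) for rid, f in sampled_factors.items()}
    return [region_avg[rid] for rid in pixel_regions]
```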
18. The method as claimed in claim 16, wherein each mura compensation function set comprises an original gray level—expected luminance transformation,
x_e = L_peak·(y_o/255)^γ,
where y_o represents the original gray level, L_peak represents a peak luminance, γ represents the gamma coefficient, and x_e represents an expected luminance corresponding to y_o while L_peak and γ are satisfied.
19. The method as claimed in claim 18, wherein each mura compensation function set further comprises an expected luminance—mura-compensated gray level transformation, y_c = a·x_e^n + b·x_e + c, where y_c represents the mura-compensated gray level corresponding to x_e.
20. The method as claimed in claim 11, further comprising executing a luminance datum—ideal gray level transformation,
y_r = (x_t/L_peak)^(1/γ)·255,
where x_t represents the luminance datum, L_peak and γ represent a peak luminance and a gamma factor of the corresponding pixel, respectively, and y_r represents an ideal gray level corresponding to x_t while L_peak and γ are satisfied.
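The claim 20 transformation is the inverse of the gamma curve and can be expressed directly:

```python
def ideal_gray_level(x_t, l_peak, gamma):
    # Claim 20 transformation: y_r = (x_t / L_peak) ** (1 / gamma) * 255,
    # the inverse of the gamma curve y -> L_peak * (y / 255) ** gamma.
    return (x_t / l_peak) ** (1.0 / gamma) * 255.0
```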
21. The method as claimed in claim 20, further comprising testing the pixels by more than one test gray level and, for each pixel, calculating gray level differences between the test gray levels and the corresponding ideal gray levels and regarding the gray level differences as the mura compensation coefficient set of the corresponding pixel.
22. The method as claimed in claim 21, wherein the behavior of the mura compensation function set further comprises:
determining the value of the original gray level of the corresponding pixel to identify the test gray level nearest the original gray level; and
adjusting the original gray level by the gray level difference corresponding to the test gray level to get the mura-compensated gray level.
23. The method as claimed in claim 20, further comprising calculating a gray level difference between the test gray level and the ideal gray level for each pixel, and regarding the gray level difference and a plurality of weight factors as the mura compensation coefficient set of the corresponding pixel.
24. The method as claimed in claim 23, wherein the behavior of the mura compensation function set further comprises:
determining the value of the original gray level of the corresponding pixel to identify the weight factor corresponding to the original gray level;
multiplying the gray level difference by the weight factor to get a weighted gray level difference; and
adjusting the original gray level by the weighted gray level difference to get the mura-compensated gray level.
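A sketch of the weighted adjustment of claims 23 and 24; indexing the weight table by the original gray level is an assumed layout, since the claims leave the association open:

```python
def compensate_with_weight(y_o, gray_diff, weights):
    # Claims 23-24 sketch: look up the weight factor for the original
    # gray level, scale the stored gray level difference by it, and
    # adjust the original gray level by the weighted difference.
    weighted = gray_diff * weights[y_o]
    return y_o + weighted
```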