
1. A method comprising:
partitioning a database corresponding to object images into a first partition and a second partition based on a fuzzy similarity analysis of a measure of the object images to a first threshold;
partitioning each of the first partition and the second partition into at least two portions so that the measure of the object images having a fuzzy similarity greater than or equal to a second threshold cluster into a selected one of the at least two portions;
determining a feature set from image content of a query object image;
after partitioning the first partition into the at least two portions, using fuzzy logic to search the database for at least one image similar to the query object image; and
outputting the at least one image similar to the query object image.
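The partition-then-search flow of claim 1 can be sketched as follows. Everything here is an illustrative assumption rather than the claimed implementation: the toy `fuzzy_similarity` membership function, the tuple feature vectors, and the use of the first image of each partition as its representative (per claim 3, one object image from each partition is compared with the query).

```python
# Hypothetical sketch of claims 1, 3, and 4: threshold-based fuzzy
# partitioning followed by a representative-per-partition search.

def fuzzy_similarity(a, b):
    """Toy fuzzy membership in [0, 1]: 1.0 when feature vectors coincide."""
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

def partition(images, reference, threshold):
    """Split images into (similar, dissimilar) relative to a reference image."""
    first = [img for img in images if fuzzy_similarity(img, reference) >= threshold]
    second = [img for img in images if fuzzy_similarity(img, reference) < threshold]
    return first, second

def search(partitions, query):
    """Compare one representative per partition with the query, then scan
    the partition indicating maximum similarity for the best match."""
    best = max(partitions, key=lambda p: fuzzy_similarity(p[0], query))
    return max(best, key=lambda img: fuzzy_similarity(img, query))
```

The point of the two-stage structure is that only one image per partition is compared at search time before descending into the winning partition.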
2. The method of claim 1 further comprising:
deriving the feature set for each of the object images from contours of at least two views of objects corresponding to each of the object images.
3. The method of claim 1, wherein using the fuzzy logic comprises comparing one object image from each of said first and second partitions with said query object image.
4. The method of claim 3, further comprising:
based on the comparison, obtaining the at least one similar image as a match in the partition that indicates maximum similarity with said query object image.
5. The method of claim 1, further comprising:
forming a similarity matrix for the object images within the database before partitioning the database.
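The similarity matrix of claim 5 can be sketched in a few lines; `sim` is a placeholder for whatever fuzzy similarity measure the method uses, and the assumption here is that each object image is represented by a precomputed feature vector.

```python
# Minimal sketch of claim 5: pairwise similarity of all object images,
# formed before the database is partitioned.

def similarity_matrix(features, sim):
    """Square matrix m where m[i][j] = sim(features[i], features[j])."""
    return [[sim(a, b) for b in features] for a in features]
```

For a symmetric `sim`, the resulting matrix is symmetric with a unit diagonal, which is what the subsequent threshold-based partitioning operates on.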
6. A method comprising:
partitioning a database corresponding to object images into a plurality of sets based on fuzzy logic;
obtaining a query image;
after partitioning the database into the plurality of sets, searching the database for a solution set having a maximum similarity to the query image using fuzzy logic; and
outputting at least a portion of the solution set.
7. The method of claim 6, wherein searching the database comprises comparing a single image of each of a plurality of sets within the database to the query image.
8. The method of claim 7, wherein comparing the single image comprises comparing a feature vector of the query image to a corresponding feature vector of the single image.
9. The method of claim 6, further comprising partitioning the database into a plurality of levels, each of the levels corresponding to a similarity threshold.
10. The method of claim 6, wherein outputting a portion of the solution set comprises displaying at least one object image corresponding to the portion of the solution set.
11. An article comprising a machine-readable storage medium containing instructions that if executed enable a system to:
partition a database corresponding to object images into a plurality of sets based on fuzzy logic;
obtain a query image;
after the database is partitioned, search the database for a solution set having a maximum similarity to the query image using the fuzzy logic; and
output at least a portion of the solution set.
12. The article of claim 11, further comprising instructions that if executed enable the system to compare a single image of each of a plurality of sets within the database to the query image.
13. The article of claim 12, further comprising instructions that if executed enable the system to compare a feature vector of the query image to a corresponding feature vector of the single image.
14. A system comprising:
a dynamic random access memory containing instructions that when executed enable the system to partition a database corresponding to object images into a first partition and a second partition based on a fuzzy similarity analysis of a measure of the object images to a first threshold; to thereafter use fuzzy logic to search the database for at least one image similar to a query object image; and to output the at least one image similar to the query object image; and
a processor coupled to the dynamic random access memory to execute the instructions.
15. The system of claim 14, further comprising instructions that when executed enable the system to derive a feature set for each of the object images from contours of at least two views of objects corresponding to each of the object images.
16. The system of claim 14, further comprising instructions that when executed enable the system to obtain the at least one similar image as a match in the partition that indicates maximum similarity with said query object image.
17. The system of claim 16, further comprising a display coupled to the processor to display the query object image and the at least one similar image.

The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.

1. A method for encoding a digitized image comprising at least one image object having a plurality of picture elements, wherein encoding information is allocated to the plurality of picture elements, the method comprising the steps of:
grouping the plurality of picture elements to form at least one image block;
determining a DC portion of the encoding information allocated to the plurality of picture elements contained in at least one part of the at least one image block;
subtracting the DC portion from the encoding information allocated to the plurality of picture elements of the at least one part of the at least one image block containing an edge of the image object to achieve a subtraction result; and
transforming the subtraction result by a shape-adaptive transformation encoding to achieve transformed encoding information.
2. The method according to claim 1, wherein the transformation encoding is performed such that signal energy of the encoding information of the picture elements of the at least one part of the at least one image block within a location domain is substantially equal to signal energy of the transformed encoding information of the picture elements of the at least one part of the at least one image block within a frequency domain.
3. The method according to claim 1, wherein the subtraction result is comprised of a plurality of difference values d_j, and transformation coefficients c_j are generated from the plurality of difference values d_j according to the equation:
c_j = (2/N) · DCT_N(p, k) · d_j
wherein N is a quantity of an image vector to be transformed in which the picture elements are contained, DCT_N is a transformation matrix of size N×N, and p, k are indices where p, k ∈ [0, N−1].
4. The method according to claim 1, wherein the step of subtracting the DC portion from the encoding information is only applied to edge image blocks that are encoded during an intra-image encoding mode.
5. The method according to claim 1, further comprising the step of:
scaling the DC portion.
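Claims 1 through 5 of this set can be illustrated with a minimal sketch of the ΔDC step: determine the DC portion (assumed here to be the mean value), subtract it from the picture elements, and transform per the claim 3 equation c_j = (2/N) · DCT_N(p, k) · d_j. The 1-D DCT basis below is a stand-in for the full shape-adaptive transformation, whose row/column handling of arbitrarily shaped segments is omitted.

```python
# Hedged sketch of claims 1-3: DC removal followed by the claimed
# c = (2/N) * DCT_N * d transform, applied to a 1-D pixel segment.
import math

def dct_matrix(n):
    """DCT-II style matrix DCT_N(p, k) of size N x N (row 0 is all ones)."""
    return [[math.cos(p * (2 * k + 1) * math.pi / (2 * n)) for k in range(n)]
            for p in range(n)]

def encode_segment(pixels):
    """Return (DC portion, transform coefficients) for one segment."""
    n = len(pixels)
    dc = sum(pixels) / n                      # DC portion (mean value)
    d = [x - dc for x in pixels]              # subtraction result of claim 1
    m = dct_matrix(n)
    c = [(2 / n) * sum(m[p][k] * d[k] for k in range(n)) for p in range(n)]
    return dc, c
```

Because the DC portion has been removed, the difference values sum to zero and the first coefficient (the all-ones basis row) vanishes, which is the energy-compaction point of the ΔDC approach.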
6. A method for decoding a digitized image comprised of at least one image object having a plurality of picture elements, wherein the plurality of picture elements have been shape-adaptive transformation encoded into transformed encoding information, the plurality of picture elements are grouped to form at least one image block and a DC portion of encoding information of picture elements contained within the at least one image block is allocated to the at least one image block, the method comprising steps of:
inverse transformation encoding the plurality of picture elements having been shape-adaptive transformation encoded for at least one part of the at least one image block to achieve inverse transformed encoding information; and
adding the DC portion to each picture element of the at least one image block containing an edge of the image object and having been inverse transformation encoded to achieve an addition result.
7. The method according to claim 6, wherein inverse transformation coding is performed such that signal energy of the encoding information of the picture elements of the at least one part of each edge image block within a location domain is substantially equal to signal energy of the transformed encoding information of the picture elements of the at least one part of each edge image block within a frequency domain.
8. The method according to claim 6, wherein the addition result is comprised of a plurality of difference values d_j that are generated from transformation coefficients c_j contained within the transformed encoding information according to the equation:
d_j = (2/N) · (DCT_N(p, k))^(−1) · c_j
wherein N is a quantity of an image vector to be transformed in which the picture elements are contained, DCT_N is a transformation matrix of size N×N, p, k are indices where p, k ∈ [0, N−1], and (·)^(−1) is the inverse of a matrix.
9. The method according to claim 6, wherein the step of adding the DC portion to each picture element which has been inverse transformation encoded is only applied to edge image blocks that are encoded during an intra-image encoding mode.
10. The method according to claim 6, wherein the DC portion is scaled.
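The decoding side (claims 6 through 10) wraps the inverse transform with a DC re-addition. The inverse shape-adaptive transform itself is elided in this sketch; `d` stands for the already inverse-transformed picture elements of an edge image block, an assumption made so the DC step can be shown in isolation.

```python
# Hedged sketch of the DC re-addition step of claim 6: the DC portion
# allocated to the image block is added back to every picture element
# of an edge image block after inverse transformation encoding.

def decode_segment(dc, d):
    """Add the stored DC portion back to each inverse-transformed element."""
    return [x + dc for x in d]
```

Since encoding subtracted the same DC portion (claim 1 of this set), subtract-then-add is the identity on the DC path, so a lossless transform pair reconstructs the original picture elements exactly.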
11. An apparatus for encoding a digitized image having at least one image object that is comprised of a plurality of picture elements that are allocated encoding information, the apparatus comprising:
a processor unit configured to:
group the plurality of picture elements to form at least one image block;
determine a DC portion of the encoding information allocated to the plurality of picture elements contained in at least one part of the at least one image block;
subtract the DC portion from the encoding information allocated to the plurality of picture elements of the at least one part of the at least one image block containing an edge of the image object to achieve a subtraction result; and
transform the subtraction result by shape-adaptive transformation encoding to achieve transformed encoding information.
12. The apparatus according to claim 11, wherein the processor unit is further configured to perform transformation encoding such that signal energy of the encoding information of the picture elements of the at least one part of the at least one image block within a location domain is substantially equal to signal energy of the transformed encoding information of the picture elements of the at least one part of the at least one image block within a frequency domain.
13. The apparatus according to claim 11, wherein the processor unit is configured to derive the subtraction such that the subtraction result is comprised of a plurality of difference values d_j, and transformation coefficients c_j are generated from the plurality of difference values d_j according to the equation:
c_j = (2/N) · DCT_N(p, k) · d_j
wherein N is a quantity of an image vector to be transformed in which the picture elements are contained, DCT_N is a transformation matrix of size N×N, and p, k are indices where p, k ∈ [0, N−1].
14. The apparatus according to claim 11, wherein the processor unit is configured such that subtraction of the DC portion from the encoding information is only applied to edge image blocks that are encoded during an intra-image encoding mode.
15. The apparatus according to claim 11, wherein the processor unit is configured to scale the DC portion.
16. An apparatus for decoding a digitized image comprised of at least one image object having a plurality of picture elements, wherein the plurality of picture elements have been shape-adaptive transformation encoded into transformed encoding information, the plurality of picture elements are grouped to form at least one image block and a DC portion of encoding information of picture elements contained within the at least one image block is allocated to the at least one image block, the apparatus comprising:
a processor unit configured to:
inverse transformation encode the plurality of picture elements having been shape-adaptive transformation encoded for at least one part of the at least one image block to achieve inverse transformed encoding information; and
add the DC portion to each picture element of the at least one image block containing an edge of the image object and having been inverse transformation encoded to achieve an addition result.
17. The apparatus according to claim 16, wherein the processor unit performs inverse transformation coding such that signal energy of the encoding information of the picture elements of the at least one part of each edge image block within a location domain is substantially equal to signal energy of the transformed encoding information of the picture elements of the at least one part of each edge image block within a frequency domain.
18. The apparatus according to claim 16, wherein the processor unit is configured to derive the addition result such that the addition result is comprised of a plurality of difference values d_j that are generated from the transformed encoding information according to the equation:
d_j = (2/N) · (DCT_N(p, k))^(−1) · c_j
wherein N is a quantity of an image vector to be transformed in which the picture elements are contained, DCT_N is a transformation matrix of size N×N, p, k are indices where p, k ∈ [0, N−1], and (·)^(−1) is the inverse of a matrix.
19. The apparatus according to claim 16, wherein the processor unit is configured such that addition of the DC portion to each picture element having been inverse transformation encoded is only applied to edge image blocks that are encoded during an intra-image encoding mode.
20. The apparatus according to claim 16, wherein the processor unit is configured to scale the DC portion.
21. An apparatus for encoding a digitized image, the image comprised of at least one image object having a plurality of picture elements, at least one portion of the picture elements being grouped into at least one image block, comprising:
a processing unit including:
a processing unit input receiving the at least one image block comprised of the at least one portion of the plurality of picture elements;
a first switching unit connected to the input, the first switching unit having first and second input contacts and corresponding first and second switching positions, and an output;
a subtraction unit connected between the processing unit input and the second input contact of the first switching unit;
a transformation encoding unit connected to the output of the first switching unit for encoding the image block according to a prescribed transformation; and
a memory connected to the processing unit input and to the subtraction unit, the memory storing luminance information of a preceding image block;
wherein the subtraction unit subtracts luminance information of the at least one image block from the luminance information of the preceding image block stored in the memory; and
wherein the first switching unit is in the first position connecting the processing unit input to the transformation encoding unit when the processing unit is operating in a first mode, and the first switching unit is in the second position connecting the subtraction unit to the transformation encoding unit when the processing unit is operating in a second mode.
22. The apparatus according to claim 21, further comprising:
an inverse transformation encoding unit connected to an output of the transformation encoding unit for decoding the encoded image block and outputting decoded image information;
an addition unit connected to an output of the inverse transformation encoding unit; and
a second switching unit having first and second switching positions that is connected to the first switching unit so that the switching positions of the second switching unit correspond to the switching positions of the first switching unit, the second switching unit connected to the addition unit, the subtraction unit and the memory;
wherein the second switching unit connects the memory to the addition unit when the processing unit is operating in the second mode and the luminance information of the preceding image block is added to the decoded image information.
23. The apparatus according to claim 21, wherein the first mode is an inter-image encoding mode and the second mode is an intra-image encoding mode.
24. The apparatus according to claim 21, wherein the prescribed transformation is a shape-adaptive discrete cosine transformation.
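The two switch positions of apparatus claims 21 through 24 can be sketched as a mode-dependent routing step. `transform` is a placeholder for the prescribed transformation (per claim 24, a shape-adaptive DCT), and the mode names follow claim 23 (first mode: inter-image encoding; second mode: intra-image encoding); the subtraction order (preceding block minus current block) mirrors the claim 21 wording.

```python
# Illustrative sketch of the switching path in claims 21-24: the block is
# routed either directly to transformation encoding (first switch position)
# or through the subtraction unit fed from memory (second switch position).

def process_block(block, memory, mode, transform):
    """Route one image block per the first switching unit's position."""
    if mode == "first":
        source = list(block)                               # direct path
    else:
        # Subtraction unit: stored luminance of the preceding image block
        # minus the current block's luminance, as recited in claim 21.
        source = [prev - cur for prev, cur in zip(memory, block)]
    return transform(source)
```

With an identity `transform`, the two modes simply expose which signal reaches the transformation encoding unit, which is all the switching structure decides.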
25. A method for encoding a digitized image, the image comprised of at least one image object having a plurality of picture elements, at least one portion of the picture elements being grouped into at least one image block, comprising:
receiving the at least one image block comprised of the at least one portion of the plurality of picture elements at an input of a processing unit;
transmitting the at least one image block to a first switching unit connected to the input of the processing unit, the first switching unit having first and second input contacts and corresponding first and second switching positions, and an output;
encoding the image block according to a prescribed transformation via a transformation encoding unit;
storing luminance information of a preceding image block in a memory; and
subtracting luminance information of the at least one image block from the luminance information of the preceding image block stored in the memory, wherein the first switching unit is in the first position connecting the processing unit input to the transformation encoding unit when the processing unit is operating in a first mode, and the first switching unit is in the second position connecting the subtraction unit to the transformation encoding unit when the processing unit is operating in a second mode.
26. The method according to claim 25, further comprising:
decoding the encoded image block and outputting decoded image information via an inverse transformation encoding unit connected to an output of the transformation encoding unit;
transmitting the decoded image block to an addition unit connected to an output of the inverse transformation encoding unit; and
providing a second switching unit having first and second switching positions that is connected to the first switching unit so that the switching positions of the second switching unit correspond to the switching positions of the first switching unit, the second switching unit connected to the addition unit, the subtraction unit and the memory;
wherein the second switching unit connects the memory to the addition unit when the processing unit is operating in the second mode and the luminance information of the preceding image block is added to the decoded image information.
27. The method according to claim 25, wherein the first mode is an inter-image encoding mode and the second mode is an intra-image encoding mode.
28. The method according to claim 25, wherein the prescribed transformation is a shape-adaptive discrete cosine transformation.