
1-32. (canceled)
33. A computer-implemented method for providing content based on a knowledge representation (KR), the method comprising:
obtaining user context information associated with a user, wherein the user context information includes information regarding an attribute of the user, information regarding an activity of the user, and/or information provided by the user;
identifying a group of one or more concepts relevant to the user context by performing a synthesis operation on a user KR, wherein the user KR includes at least one kernel module combined with at least one user-specific customized module;
identifying content information corresponding to the identified group of one or more concepts; and
providing the identified content information to the user.
34. The computer-implemented method of claim 33, wherein the identified content includes documents, audiovisual information, tweets, emails, messages posted on a social networking platform, blog entries, or any combination thereof.
35. The computer-implemented method of claim 33, further comprising ranking the content information provided to the user based on relevance of the content information to the identified group of one or more concepts.
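The ranking of claim 35 can be illustrated with a minimal sketch. The item structure and the overlap-count scoring below are hypothetical stand-ins for whatever relevance measure an actual implementation would use:

```python
# Hypothetical sketch of claim 35: rank content items by how many of the
# identified concepts each item touches (a stand-in relevance measure).

def rank_content(items, concept_group):
    group = set(concept_group)
    scored = [(len(group & set(item["concepts"])), item["title"])
              for item in items]
    # Highest overlap first; ties broken by title for determinism.
    return [title for _, title in sorted(scored, key=lambda s: (-s[0], s[1]))]

items = [
    {"title": "jazz-history.html", "concepts": ["jazz"]},
    {"title": "trail-mix.html", "concepts": ["jazz", "hiking"]},
]
print(rank_content(items, ["jazz", "hiking"]))
```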
36. The computer-implemented method of claim 33, further comprising combining the at least one kernel module with the at least one user-specific customized module, the combining comprising an entity resolution act comprising:
identifying a first concept associated with the kernel module and identifying a second concept associated with the user-specific customized module;
matching the first concept with the second concept based on an identifier and/or a pattern; and
merging the first concept and the second concept into a third concept.
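The entity-resolution act of claim 36 can be sketched as follows. The concept dictionaries, identifier-based matching, and label-union merge are all hypothetical illustrations; the claim does not prescribe any particular data structure or merge policy:

```python
# Hypothetical sketch of claim 36's entity-resolution act: match a
# kernel-module concept with a user-module concept by a shared
# identifier, then merge the pair into a single third concept.

def merge_concepts(kernel_concepts, user_concepts):
    """Merge concepts that share an identifier; pass the rest through."""
    merged, seen = [], set()
    user_by_id = {c["id"]: c for c in user_concepts}
    for kc in kernel_concepts:
        uc = user_by_id.get(kc["id"])
        if uc is not None:
            # Third concept: union of labels from both source concepts.
            merged.append({
                "id": kc["id"],
                "labels": sorted(set(kc["labels"]) | set(uc["labels"])),
            })
            seen.add(kc["id"])
        else:
            merged.append(kc)
    merged.extend(c for c in user_concepts if c["id"] not in seen)
    return merged

kernel = [{"id": "c1", "labels": ["music"]}]
user = [{"id": "c1", "labels": ["jazz"]}, {"id": "c2", "labels": ["hiking"]}]
print(merge_concepts(kernel, user))
```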
37. The computer-implemented method of claim 33, wherein the user context information is part of an interest network corresponding to information provided by the user.
38. The computer-implemented method of claim 37, further comprising providing the interest network to an analysis engine for deconstruction.
39. At least one non-transitory computer-readable storage medium storing computer-executable instructions that, when executed, perform a method for providing content based on a knowledge representation (KR), the method comprising:
obtaining user context information associated with a user, wherein the user context information includes information regarding an attribute of the user, information regarding an activity of the user, and/or information provided by the user;
identifying a group of one or more concepts relevant to the user context by performing a synthesis operation on a user KR, wherein the user KR includes at least one kernel module combined with at least one user-specific customized module;
identifying content information corresponding to the identified group of one or more concepts; and
providing the identified content information to the user.
40. The at least one non-transitory computer-readable storage medium of claim 39, wherein the identified content includes documents, audiovisual information, tweets, emails, messages posted on a social networking platform, blog entries, or any combination thereof.
41. The at least one non-transitory computer-readable storage medium of claim 39, wherein the method further comprises ranking the content information provided to the user based on relevance of the content information to the identified group of one or more concepts.
42. The at least one non-transitory computer-readable storage medium of claim 39, wherein the method further comprises combining the at least one kernel module with the at least one user-specific customized module, the combining comprising an entity resolution act comprising:
identifying a first concept associated with the kernel module and identifying a second concept associated with the user-specific customized module;
matching the first concept with the second concept based on an identifier and/or a pattern; and
merging the first concept and the second concept into a third concept.
43. The at least one non-transitory computer-readable storage medium of claim 39, wherein the user context information is part of an interest network corresponding to information provided by the user.
44. The at least one non-transitory computer-readable storage medium of claim 43, wherein the method further comprises providing the interest network to an analysis engine for deconstruction.
45. Apparatus comprising:
at least one processor; and
at least one storage medium storing processor-executable instructions that, when executed by the at least one processor, perform a method for providing content based on a knowledge representation (KR), the method comprising:
obtaining user context information associated with a user, wherein the user context information includes information regarding an attribute of the user, information regarding an activity of the user, and/or information provided by the user;
identifying a group of one or more concepts relevant to the user context by performing a synthesis operation on a user KR, wherein the user KR includes at least one kernel module combined with at least one user-specific customized module;
identifying content information corresponding to the identified group of one or more concepts; and
providing the identified content information to the user.
46. The apparatus of claim 45, wherein the identified content includes documents, audiovisual information, tweets, emails, messages posted on a social networking platform, blog entries, or any combination thereof.
47. The apparatus of claim 45, wherein the method further comprises ranking the content information provided to the user based on relevance of the content information to the identified group of one or more concepts.
48. The apparatus of claim 45, wherein the method further comprises combining the at least one kernel module with the at least one user-specific customized module, the combining comprising an entity resolution act comprising:
identifying a first concept associated with the kernel module and identifying a second concept associated with the user-specific customized module;
matching the first concept with the second concept based on an identifier and/or a pattern; and
merging the first concept and the second concept into a third concept.
49. The apparatus of claim 45, wherein the user context information is part of an interest network corresponding to information provided by the user.
50. The apparatus of claim 49, wherein the method further comprises providing the interest network to an analysis engine for deconstruction.

The claims below are in addition to those above.
All references to claims which appear below refer to the numbering after this sentence.

What is claimed is:

1. A method for preserving rate-distortion information associated with the compression of an input digital image, said method comprising the steps of:
(a) decomposing the input digital image to produce a plurality of subbands, each subband having a plurality of subband coefficients;
(b) quantizing the plurality of subband coefficients of each subband of the decomposed input digital image to produce a quantized output value for each subband coefficient of each subband;
(c) partitioning each subband into a plurality of codeblocks;
(d) forming at least one bit-plane from the quantized output values of subband coefficients of each codeblock of each subband;
(e) entropy encoding each bit-plane of each codeblock for each subband in at least one pass to produce a compressed bit-stream corresponding to each pass, wherein each codeblock is entropy encoded independently of the other codeblocks;
(f) computing a rate value and a distortion-reduction value for each pass;
(g) providing a layer-table that specifies the number of expected layers and the criteria for forming the layers;
(h) using the computed rate and distortion-reduction values to identify a set of passes and their corresponding compressed bit-streams that are included in each layer specified in the layer-table;
(i) producing tagged rate and distortion-reduction tables from the computed rate values and distortion-reduction values, wherein the rate values corresponding to passes which are segment boundaries are tagged;
(j) ordering the compressed bit-streams corresponding to passes into layers to produce a compressed digital image file, wherein each layer includes compressed bit-streams corresponding to passes, from the identified set for that layer, that have not been included in any previous layers; and
(k) storing the tagged rate and distortion-reduction tables as rate-distortion information in association with the compressed digital image file.
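Steps (f) through (k) of claim 1 can be sketched in miniature. The pass records, the cumulative rate budgets standing in for the layer-table criteria, and the in-order budget fill are hypothetical simplifications; a real encoder would select passes per codeblock using rate-distortion slopes:

```python
# Hypothetical sketch of claim 1, steps (f)-(k): given per-pass rate and
# distortion-reduction values, assign passes to layers under each
# layer's cumulative rate budget, and build tagged rate tables in which
# segment-boundary rates are flagged.

def form_layers(passes, layer_budgets):
    """passes: dicts with 'rate', 'dist_red', 'boundary'.
    layer_budgets: cumulative rate limit for each layer.
    Passes within a codeblock must appear in coding order, so each
    layer simply takes the next passes until its budget is reached."""
    layers, used, i = [], 0.0, 0
    for budget in layer_budgets:
        layer = []
        while i < len(passes) and used + passes[i]["rate"] <= budget:
            used += passes[i]["rate"]
            layer.append(i)
            i += 1
        layers.append(layer)
    # Tagged rate table: (rate, is_segment_boundary) pairs, with the
    # distortion-reduction table kept alongside (steps (i)/(k)).
    rate_table = [(p["rate"], p["boundary"]) for p in passes]
    dist_table = [p["dist_red"] for p in passes]
    return layers, rate_table, dist_table

passes = [
    {"rate": 10, "dist_red": 5.0, "boundary": False},
    {"rate": 20, "dist_red": 3.0, "boundary": True},
    {"rate": 30, "dist_red": 1.0, "boundary": False},
]
layers, rate_table, dist_table = form_layers(passes, [30, 60])
print(layers)
```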
2. The method according to claim 1 wherein step (k) comprises:
(a) encoding the tagged rate and distortion-reduction tables to produce encoded rate-distortion information, wherein the rate-distortion information comprises rate values and distortion-reduction values for passes contained in the compressed bit-stream; and
(b) associating the encoded rate-distortion information with the compressed digital image.
3. The method according to claim 1 wherein the rate-distortion information comprises rate and distortion-reduction values for all passes contained in the compressed bit-stream.
4. The method according to claim 1 wherein the rate-distortion information comprises distortion-reduction values only for passes contained in the compressed image that are valid truncation points, and rate values only for passes contained in the compressed image that are valid truncation points but are not segment boundaries.
5. The method according to claim 2 wherein the rate-distortion information is entropy encoded.
6. The method according to claim 1 wherein the rate-distortion information is stored uncoded.
7. The method according to claim 2 wherein the encoded rate-distortion information is stored as metadata contained in the compressed digital image file.
8. The method according to claim 2 wherein the encoded rate-distortion information is stored as a separate file associated with the corresponding compressed digital image file.
9. The method according to claim 1 wherein the compressed digital image is subsequently transcoded to a given bit-rate and resolution, using its associated stored rate and distortion-reduction information, the transcoding comprising the steps of:
(a) parsing the encoded digital image file to extract the compressed codeblock bit-streams and codeblock segment rates;
(b) extracting the rate and distortion-reduction values for the codeblock passes from the encoded rate-distortion information;
(c) providing a layer-table that specifies the number of expected layers and the criteria for forming the layers;
(d) calculating visual weights based on user-specified viewing condition parameters and quantizer step-sizes for the subbands;
(e) using the extracted rate and distortion-reduction information and the visual weights to identify a set of passes and their corresponding compressed bit-streams that are included in each layer specified in the layer-table;
(f) producing tagged rate and distortion-reduction tables, wherein the rate values corresponding to passes which are segment boundaries are tagged; and
(g) ordering the compressed bit-streams corresponding to passes into layers to produce a transcoded digital image, wherein each layer includes compressed bit-streams corresponding to passes, from the identified set for that layer, that have not been included in any previous layers.
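Steps (d) and (e) of claim 9 can be illustrated with a toy reweighting pass. The exponential falloff model, the per-pass record layout, and the constant 0.1 are all invented for illustration; a real JPEG2000-style transcoder would derive visual weights from a contrast-sensitivity model of the stated viewing conditions:

```python
import math

# Toy sketch of claim 9, steps (d)-(e): derive a per-subband visual
# weight from a viewing-distance parameter and the quantizer step-size,
# then scale each pass's stored distortion-reduction value by its
# subband's weight before re-forming layers. The falloff model is
# illustrative only, not an actual CSF.

def visual_weight(level, viewing_distance, step_size):
    # Coarser (higher-level) subbands retain more weight as viewing
    # distance grows; normalize by the quantizer step-size.
    return math.exp(-0.1 * viewing_distance / (level + 1)) / step_size

def reweight_passes(passes, viewing_distance):
    # passes: dicts with 'level', 'step', and stored 'dist_red' values.
    return [dict(p, dist_red=p["dist_red"] *
                 visual_weight(p["level"], viewing_distance, p["step"]))
            for p in passes]

ps = [{"level": 0, "step": 1.0, "dist_red": 1.0},
      {"level": 2, "step": 1.0, "dist_red": 1.0}]
print(reweight_passes(ps, 10.0))
```

Once reweighted, the passes feed the same layer-formation procedure used at encode time, which is what lets the transcoder skip re-running the entropy coder.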
10. The method according to claim 9 further comprising the steps of:
(h) encoding the rate-distortion information to produce recoded rate-distortion information, wherein the rate-distortion information comprises rate values and distortion-reduction values for passes contained in the compressed bit-stream; and
(i) associating the recoded rate-distortion information with the transcoded digital image.
11. The method according to claim 1 wherein the criteria for the formation of layers in the layer-table are specified in terms of maximum allowable rate and resolution.
12. The method according to claim 9 wherein the criteria for the formation of layers in the layer-table are specified in terms of maximum allowable rate and resolution.
13. A method for encoding rate-distortion information associated with the compression of an input digital image, said method comprising the steps of:
(a) performing JPEG2000 compliant compression of the input digital image, wherein a series of compressed coding passes are aggregated in a layer formation process to form layers and wherein rate values and distortion-reduction values are computed for each pass and used in the layer formation process to form a compressed bit-stream;
(b) producing tagged rate and distortion-reduction tables from the computed rate values and distortion-reduction values, wherein the rate values corresponding to passes which are segment boundaries are tagged;
(c) encoding the tagged rate and distortion-reduction tables to produce encoded rate-distortion information, wherein the rate-distortion information comprises rate values and distortion-reduction values for passes contained in the compressed bit-stream; and
(d) associating the encoded rate-distortion information with the compressed digital image.
14. The method according to claim 13 wherein the compressed digital image is subsequently transcoded using its associated rate and distortion-reduction information.
15. A computer program product for performing the method of claim 1.
16. A computer program product for performing the method of claim 13.
17. The method according to claim 13 further comprising the steps of:
(e) generating additional information relating to the importance of the photographed subject and corresponding background regions of the digital image; and
(f) storing the additional information in association with the compressed digital image.
18. The method according to claim 17 wherein step (e) comprises:
(a) generating a main subject belief map containing a continuum of belief values relating to the importance of the subject and background regions in the digital image;
(b) generating an average belief value for each codeblock in the input digital image; and
(c) associating the additional information in the form of the average belief value for each codeblock, with the compressed digital image.
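Step (b) of claim 18 can be sketched directly. The nested-list belief map and square codeblock tiling are hypothetical simplifications of whatever map and codeblock geometry an implementation actually uses:

```python
# Hypothetical sketch of claim 18, step (b): average a main-subject
# belief map over each codeblock to obtain one importance value per
# codeblock.

def codeblock_beliefs(belief_map, block_size):
    """belief_map: 2-D list of belief values in [0, 1].
    Returns one averaged value per block_size x block_size codeblock
    (edge blocks may be smaller)."""
    h, w = len(belief_map), len(belief_map[0])
    out = []
    for r in range(0, h, block_size):
        row = []
        for c in range(0, w, block_size):
            vals = [belief_map[i][j]
                    for i in range(r, min(r + block_size, h))
                    for j in range(c, min(c + block_size, w))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

belief = [[1.0, 1.0], [0.0, 0.0]]
print(codeblock_beliefs(belief, 2))
```

These per-codeblock averages are the "additional information" that claim 19 later folds into the visual weights during transcoding.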
19. The method according to claim 18 wherein the compressed digital image is subsequently transcoded to a given bit-rate and resolution, using its associated stored rate-distortion information, the transcoding comprising the steps of:
(a) parsing the encoded digital image file to extract the compressed codeblock bit-streams and codeblock segment rates;
(b) extracting the rate and distortion-reduction values for the codeblock passes from the encoded rate-distortion information;
(c) providing a layer-table that specifies the number of expected layers and the criteria for forming the layers;
(d) calculating visual weights based on the additional information in the form of average belief value for each codeblock, user-specified viewing condition parameters, and quantizer step-size for each subband;
(e) using the extracted rate and distortion-reduction information and the visual weights to identify a set of passes and their corresponding compressed bit-streams that are included in each layer specified in the layer-table;
(f) producing tagged rate and distortion-reduction tables, wherein the rate values corresponding to passes which are segment boundaries are tagged; and
(g) ordering the compressed bit-streams corresponding to passes into layers to produce a transcoded digital image, wherein each layer includes compressed bit-streams corresponding to passes, from the identified set for that layer, that have not been included in any previous layers.