
1. A method for detecting interactive inputs, comprising:
concurrently capturing touch input data on a screen of a user device and non-touch gesture input data, the gesture input data being indicative of a gesture performed in an area laterally offset from the screen of the user device;
determining an input command based at least in part on a combination of the concurrently captured touch input data and the non-touch gesture input data; and
affecting an operation of the user device based on the determined input command.
2. The method of claim 1, wherein the touch input data identifies a target item on the screen of the user device, and wherein the input command comprises an adjustment of an aspect of the target item based on the gesture input data.
3. The method of claim 2, wherein the input command comprises a variable adjustment of the target item, the variable adjustment being determined from the gesture input data.
4. The method of claim 1, wherein the capturing the touch input data further comprises receiving a touch, from a user on the screen of the user device, on a desired target item to be affected; and
wherein the method further comprises determining that the target item has been disengaged by detecting a release of the touch on the screen of the user device.
5. The method of claim 1, wherein the capturing the non-touch gesture input data comprises detecting a location and a movement of an object.
6. The method of claim 5, wherein the movement of the object further comprises a movement substantially in a same plane as the screen of the user device.
7. The method of claim 1, wherein the capturing the non-touch gesture input data comprises using one or more sensors adapted to detect an object beyond a surface of the user device via ultrasonic technologies, image or video capturing technologies, or IR technologies.
8. The method of claim 1, wherein the determined input command comprises one of a plurality of different types of clicks.
9. The method of claim 8, wherein the different types of clicks further comprise a right-mouse click (RMC) created by a first pose of a hand touching the screen, or a left-mouse click (LMC) or an alternate click created by a second pose of the hand touching the screen, the first pose being different than the second pose.
10. The method of claim 1, wherein the capturing the non-touch gesture input data comprises capturing a hand pose or a hand motion.
11. A system comprising:
a display configured to display one or more images;
one or more sensors configured to detect touch input data at the display;
one or more sensors configured to detect non-touch gesture input data indicative of a gesture performed in an area laterally offset from the display; and
one or more processors configured to:
concurrently capture the touch input data and the non-touch gesture input data;
determine an input command based at least in part on a combination of the concurrently captured touch input data and the non-touch gesture input data; and
affect an operation of the system based on the determined input command.
12. The system of claim 11, wherein the touch input data identifies a target item on the display, and wherein the input command comprises an adjustment of an aspect of the target item based on the gesture input data.
13. The system of claim 12, wherein the input command comprises a variable adjustment of the target item, the variable adjustment being determined from the gesture input data.
14. The system of claim 11, wherein the one or more processors are further configured to receive a touch from a user on the display on a desired target item and to determine that the target item has been disengaged by detecting a release of the touch on the display.
15. The system of claim 11, wherein the one or more sensors configured to detect non-touch gesture input data are further configured to capture a location and a movement of an object.
16. The system of claim 15, wherein the movement of the object further comprises a movement substantially in a same plane as the display.
17. The system of claim 16, wherein the one or more sensors configured to detect non-touch gesture input data comprise ultrasonic sensors, image or video capturing sensors, or IR sensors adapted to capture the non-touch gesture input data.
18. The system of claim 11, wherein the determined input command comprises one of a plurality of different types of clicks.
19. The system of claim 18, wherein the different types of clicks further comprise a right-mouse click (RMC) created by a first pose of a hand touching the display, or a left-mouse click (LMC) or an alternate click created by a second pose of the hand touching the display, the first pose being different than the second pose.
20. The system of claim 11, wherein the one or more sensors configured to detect non-touch gesture input data are configured to capture a hand pose or a hand motion.
21. An apparatus for detecting interactive inputs, comprising:
means for concurrently capturing touch input data on a screen of a user device and non-touch gesture input data, the gesture input data being indicative of a gesture performed in an area laterally offset from the screen of the user device;
means for determining an input command based at least in part on a combination of the concurrently captured touch input data and the non-touch gesture input data; and
means for affecting an operation of the user device based on the determined input command.
22. The apparatus of claim 21, wherein the touch input data identifies a target item on the screen of the user device, and wherein the input command comprises an adjustment of an aspect of the target item based on the gesture input data.
23. The apparatus of claim 22, wherein the input command comprises a variable adjustment of the target item, the variable adjustment being determined from the gesture input data.
24. The apparatus of claim 21, wherein the means for concurrently capturing the touch input data comprises means for receiving a touch, from a user on the screen of the user device, on a desired target item to be affected; and
wherein the apparatus further comprises means for determining that the target item has been disengaged by detecting a release of the touch on the screen of the user device.
25. The apparatus of claim 21, wherein the means for concurrently capturing the non-touch gesture input data comprises means for detecting a location and a movement of an object.
26. The apparatus of claim 25, wherein the movement of the object further comprises a movement substantially in a same plane as the screen of the user device.
27. The apparatus of claim 21, wherein the means for concurrently capturing the non-touch gesture input data comprises one or more sensors adapted to detect an object beyond a surface of the apparatus via ultrasonic technologies, image or video capturing technologies, or IR technologies.
28. The apparatus of claim 21, wherein the determined input command comprises one of a plurality of different types of clicks.
29. The apparatus of claim 28, wherein the different types of clicks further comprise a right-mouse click (RMC) created by a first pose of a hand touching the screen, or a left-mouse click (LMC) or an alternate click created by a second pose of the hand touching the screen, the first pose being different than the second pose.
30. The apparatus of claim 21, wherein the means for concurrently capturing the non-touch gesture input data comprises means for capturing a hand pose or a hand motion.
31. A non-transitory computer readable medium on which are stored computer readable instructions which, when executed by a processor, cause the processor to:
concurrently capture touch input data on a screen of a user device and non-touch gesture input data, the gesture input data being indicative of a gesture performed in an area laterally offset from the screen of the user device;
determine an input command based at least in part on a combination of the concurrently captured touch input data and the non-touch gesture input data; and
affect an operation of the user device based on the determined input command.
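For illustration only (not part of the claims), the combined touch-plus-gesture method of claims 1-3 can be sketched as follows. All names here (`TouchSample`, `GestureSample`, `resolve_command`) and the mapping itself are hypothetical, not drawn from the specification:

```python
# Hypothetical sketch: a touch selects a target item on the screen while a
# concurrent off-screen gesture supplies a variable adjustment (claims 2-3).
from dataclasses import dataclass

@dataclass
class TouchSample:
    x: float          # screen coordinates of the touch
    y: float
    target_item: str  # item identified under the touch

@dataclass
class GestureSample:
    dx: float         # lateral motion in the area beside the screen
    dy: float

def resolve_command(touch: TouchSample, gesture: GestureSample) -> dict:
    """Combine concurrently captured touch and non-touch gesture data into
    one input command: the touch picks the item, the gesture magnitude
    drives a variable adjustment of that item."""
    magnitude = (gesture.dx ** 2 + gesture.dy ** 2) ** 0.5
    return {
        "target": touch.target_item,
        "action": "adjust",
        "amount": round(magnitude, 2),
    }

cmd = resolve_command(TouchSample(120, 80, "volume_slider"),
                      GestureSample(3.0, 4.0))
print(cmd)  # {'target': 'volume_slider', 'action': 'adjust', 'amount': 5.0}
```

The sketch only shows the data flow the claims recite: concurrent capture, combination into a command, then application of that command to a device operation.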

The claims below are in addition to those above.
All references to claims which appear below refer to the numbering after this sentence.

1. An organic electro-luminescent device, comprising:
a substrate;
two electrodes disposed on the substrate, one serving as an anode and the other as a cathode; and
an organic electro-luminescent structure interposed between the electrodes, comprising:
a fluorescent emissive layer;
a phosphorescent emissive layer having a host material; and
a nondoped organic material layer interposed between the fluorescent emissive layer and the phosphorescent emissive layer, having a highest occupied molecular orbital energy level lower than that of the host material in the phosphorescent emissive layer.
2. The device as claimed in claim 1, further comprising:
a hole injection layer disposed on the anode;
a hole transport layer interposed between the hole injection layer and the organic electro-luminescent structure; and
an electron transport layer interposed between the cathode and the organic electro-luminescent structure.
3. The device as claimed in claim 1, wherein the nondoped organic material layer has a thickness not exceeding 100 Å.
4. The device as claimed in claim 1, wherein the fluorescent emissive layer comprises a blue fluorescent material.
5. The device as claimed in claim 1, wherein the phosphorescent emissive layer comprises green and red phosphorescent materials.
6. The device as claimed in claim 1, wherein the host material in the phosphorescent emissive layer comprises carbazole biphenyl.
7. An organic electro-luminescent device, comprising:
a substrate;
an anode disposed on the substrate;
a fluorescent emissive layer comprising a blue fluorescent material disposed on the anode;
a nondoped organic material layer disposed on the fluorescent emissive layer;
a phosphorescent emissive layer comprising green and red phosphorescent materials disposed on the nondoped organic material layer, using carbazole biphenyl as a host material; and
a cathode disposed on the phosphorescent emissive layer;
wherein the nondoped organic material layer has a highest occupied molecular orbital energy level lower than that of the host material in the phosphorescent emissive layer.
8. The device as claimed in claim 7, wherein the nondoped organic material layer has a thickness not exceeding 100 Å.
9. The device as claimed in claim 7, further comprising:
a hole injection layer interposed between the anode and the fluorescent emissive layer;
a hole transport layer interposed between the hole injection layer and the fluorescent emissive layer; and
an electron transport layer interposed between the cathode and the phosphorescent emissive layer.
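For illustration only (not part of the claims), the HOMO-level condition that claims 1 and 7 place on the nondoped interlayer can be sketched as a simple check. The energy values below are hypothetical placeholders, not measured data from the specification:

```python
# Hypothetical sketch: the nondoped interlayer's highest occupied molecular
# orbital (HOMO) energy level must lie below that of the phosphorescent
# host (e.g. carbazole biphenyl, CBP). HOMO energies are negative; "lower"
# means deeper (more negative). Values are illustrative placeholders.
homo_levels_eV = {
    "host": -6.0,        # hypothetical HOMO of the phosphorescent host
    "interlayer": -6.3,  # hypothetical HOMO of the nondoped interlayer
}

def interlayer_satisfies_claim(interlayer_homo: float, host_homo: float) -> bool:
    """True when the interlayer HOMO is lower than the host HOMO,
    the condition the claims place on the nondoped organic layer."""
    return interlayer_homo < host_homo

ok = interlayer_satisfies_claim(homo_levels_eV["interlayer"],
                                homo_levels_eV["host"])
print(ok)  # True
```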

The claims below are in addition to those above.
All references to claims which appear below refer to the numbering after this sentence.

1. An image pickup apparatus for use in measuring a distance from an observer to a target, the image pickup apparatus comprising:
a case; and
a plurality of cameras configured to capture images of the target, the cameras being fixed in the case,
wherein the distance from the observer to the target is measured, based on the captured images of the target, by altering baseline lengths for parallax computation obtained by combining any two of the cameras.
2. The image pickup apparatus as claimed in claim 1,
wherein each of the two cameras is a first compound-eye camera that includes a lens array having a plurality of lenses arranged in a plane and a sensor, and the baseline lengths for the parallax computation include a baseline length obtained by combining the plurality of lenses in the first compound-eye cameras.
3. The image pickup apparatus as claimed in claim 2,
wherein the plurality of cameras further include one of a second compound-eye camera and a non-compound-eye camera that has a lens shape differing from a lens shape of the first compound-eye camera.
4. The image pickup apparatus as claimed in claim 2,
wherein the sensor provided in the first compound-eye camera is formed by arranging plural sensors on a substrate.
5. The image pickup apparatus as claimed in claim 1,
wherein the plurality of cameras include respective lenses and sensors to have different angles of view and different focus distances.
6. The image pickup apparatus as claimed in claim 5,
wherein at least one of the cameras is a compound-eye camera that includes a lens array having a plurality of the lenses arranged in a plane and the sensor, and the baseline lengths for the parallax computation include a baseline length obtained by combining the plurality of lenses in the compound-eye camera.
7. The image pickup apparatus as claimed in claim 5, further comprising:
a pinhole image output unit configured to convert the images captured by the cameras having the respective lenses and the respective sensors into idealized pinhole images to output the idealized pinhole images.
8. The image pickup apparatus as claimed in claim 5, further comprising:
a logic unit configured to carry out enlargement or reduction processing on the images captured by the cameras having the respective lenses and the respective sensors to form the images having a uniform size.
9. The image pickup apparatus as claimed in claim 5, wherein
the distance from the observer to the target is computed, utilizing an algorithm, based on any one of the baseline lengths for the parallax computation obtained by the combination of the cameras.
10. The image pickup apparatus as claimed in claim 5,
wherein the baseline lengths for the parallax computation are sequentially computed and the distance from the observer to the target is determined when a desired one of distance ranges is obtained based on a corresponding one of the computed baseline lengths.
11. The image pickup apparatus as claimed in claim 5, further comprising:
a storage unit configured to store a plurality of the distances from the observer to the target computed based on the baseline lengths for the parallax computation, such that a desired one of the stored distances from the observer to the target is selected from the storage unit in a process subsequent to a storing process where the distances computed based on the baseline lengths are stored in the storage unit.
12. The image pickup apparatus as claimed in claim 5, further comprising:
a storage unit configured to store a plurality of the distances from the observer to the target computed based on the baseline lengths for the parallax computation, such that a desired distance from the observer to the target is computed by averaging the stored distances from the observer to the target computed based on the baseline lengths for the parallax computation stored in the storage unit.
13. The image pickup apparatus as claimed in claim 11,
wherein the desired one of the stored distances selected from the storage unit is a distance computed based on a longest baseline length for the parallax computation.
14. The image pickup apparatus as claimed in claim 5,
wherein the distance from the observer to the target that is to be measured urgently is obtained by increasing a number of distance computations.
15. The image pickup apparatus as claimed in claim 1,
wherein at least one of the plurality of cameras is arranged in a camera setting direction differing from camera setting directions of the other cameras.
16. The image pickup apparatus as claimed in claim 1, further comprising:
an image separation-integration circuit configured to separate the images or integrate the images; and a parallax process circuit configured to carry out the parallax computation,
wherein the parallax computation carried out on the images acquired by the cameras is performed by the image separation-integration circuit and the parallax process circuit.
17. A rangefinder comprising the image pickup apparatus as claimed in claim 1.
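For illustration only (not part of the claims), the baseline-parallax distance measurement recited above can be sketched with the standard stereo relation distance = focal length × baseline / disparity, evaluated over several baselines obtained by pairing cameras. All numbers below are hypothetical:

```python
# Hypothetical sketch: distance from observer to target for several
# camera-pair baselines, then averaged as in claim 12. Values illustrative.
def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard stereo-parallax relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

focal_px = 800.0                       # hypothetical focal length in pixels
pair_baselines_m = [0.02, 0.05, 0.10]  # baselines from different camera pairs
disparities_px = [3.2, 8.0, 16.0]      # measured disparity per pair

distances = [stereo_distance(focal_px, b, d)
             for b, d in zip(pair_baselines_m, disparities_px)]
avg = sum(distances) / len(distances)  # averaging, as in claim 12
print([round(z, 2) for z in distances], round(avg, 2))  # [5.0, 5.0, 5.0] 5.0
```

Longer baselines give finer depth resolution at long range, which is why the claims vary the baseline by choosing different camera pairs (and, per claim 13, may prefer the distance computed from the longest baseline).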

The claims below are in addition to those above.
All references to claims which appear below refer to the numbering after this sentence.

1. A method for displaying medical information, which displays a medical image in combination with text information in a graphic display device, comprising:
providing the text information as a voice presentation when an operator points to the text information with a pointing device.
2. A method of displaying medical information according to claim 1, wherein:
the display of only said text information is disabled; and
the text information is provided as the voice presentation when an operator points to the display area of the text information in a state in which the text information is not displayed.
3. A method of displaying medical information according to claim 1, wherein:
said medical image is an image taken by an X-ray CT apparatus.
4. A method of displaying medical information according to claim 1, wherein:
said medical image is an image taken by an MRI apparatus.
5. An apparatus for displaying medical information, which displays a medical image in combination with text information on a graphic display device, comprising:
a display device for providing the text information as a voice presentation when an operator points to the text information with a pointing device.
6. An apparatus for displaying medical information according to claim 5, further comprising:
a display disabling device for disabling the display of only said text information; and
a display device for providing said text information as the voice presentation when an operator points to the display area of said text information with a pointing device in a state in which the text information is not displayed.
7. An apparatus for displaying medical information according to claim 5, wherein:
said medical image is an image taken by an X-ray CT apparatus.
8. An apparatus for displaying medical information according to claim 5, wherein:
said medical image is an image taken by an MRI apparatus.
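For illustration only (not part of the claims), the pointing-to-voice behavior of claims 1-2 can be sketched as a hit test of the pointer position against text regions, returning the string to hand to a speech synthesizer. All structures here are hypothetical:

```python
# Hypothetical sketch: find the text region under the pointer and return
# its text for voice presentation. Per claim 2, the region may be voiced
# even when its on-screen display is disabled.
from dataclasses import dataclass

@dataclass
class TextRegion:
    x0: int
    y0: int
    x1: int
    y1: int
    text: str
    displayed: bool  # display may be disabled; voicing still works

def text_under_pointer(regions, px, py):
    """Return the text of the region containing (px, py), else None."""
    for r in regions:
        if r.x0 <= px <= r.x1 and r.y0 <= py <= r.y1:
            return r.text  # hand this string to a TTS engine
    return None

regions = [TextRegion(0, 0, 200, 40, "Finding: no abnormality",
                      displayed=False)]
print(text_under_pointer(regions, 50, 20))  # Finding: no abnormality
```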