
1. Method implemented on a computer having a processor and a memory coupled to said processor for compressing a digitally coded video frame sequence, comprising the steps of
a, dividing a given frame into blocks,
b, optionally, further dividing individual blocks into smaller blocks,
c, modifying the information content of selected blocks relying on information contained in a neighbouring block or blocks,
d, generating transformed blocks by carrying out on the selected blocks a transformation (DCT) that converts spatial representation into frequency representation, and finally
e, encoding the information content of the transformed blocks by entropy coding,
characterised by that
i, compressibility analysis is performed on said selected blocks before carrying out the transformation specified in step d, and, depending on the result of the analysis,
ii, steps c, and d, are carried out on the block or
iii, optionally, the block is further partitioned into sub-blocks, and the compressibility analysis specified in step i, is performed again on the blocks resulting from individual partitioning, and
iv, the block partitioning that will potentially yield the best results is chosen relying on results given by steps i and iii, and finally
v, the transformation specified in step d, is carried out using the block partitioning with the best potential results, relying on the prediction specified in step c,

wherein at least some of steps a through e are performed using said processor.
2. The method according to claim 1, characterised by that the compressibility analysis of blocks belonging to individual block partitionings is performed taking into account the content of the blocks and/or the frequency of occurrence of individual block types.
3. The method according to claim 1, characterised by that the contents of the blocks are subjected to variance analysis either directly or by way of a Hadamard filter during the compressibility analysis.
4. The method according to claim 3, characterised by that the variance analysis is carried out using the following formula:
variance = Σ_{j=0}^{M} pixel(j)^2 − ( Σ_{j=0}^{M} pixel(j) )^2 / M
where M is the number of elements in the given block or sub-block and pixel (j) is an element of the uncompressed block, with the computed variance value being compared with a given threshold value to establish if the variance exceeds said given threshold value.
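The variance test of claim 4 can be sketched in a few lines; the Python below is an illustrative rendering only (the function names `block_variance` and `is_compressible` and the threshold value are hypothetical, not from the patent):

```python
# Sketch of claim 4's compressibility test: sum of squared pixels minus the
# squared pixel sum divided by M, compared against a threshold.
def block_variance(pixels):
    m = len(pixels)
    sum_sq = sum(p * p for p in pixels)
    sq_sum = sum(pixels) ** 2
    return sum_sq - sq_sum / m

def is_compressible(pixels, threshold):
    # A low variance suggests the block compresses well at its current size.
    return block_variance(pixels) <= threshold

flat = [10] * 16      # uniform 4x4 block: variance is exactly 0
noisy = [0, 255] * 8  # alternating extremes: very high variance
print(is_compressible(flat, 1000.0))   # True
print(is_compressible(noisy, 1000.0))  # False
```

A block that fails the test would, per claim 11, be a candidate for further subdivision.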
5. The method according to claim 1, characterised by further encoding with the entropy coding specific data that are assigned to blocks with the maximum allowed block size in a given frame, the specific data representing the block partitioning of the block they are assigned to (Quadtree).
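One minimal way to realise the quadtree description of claim 5 is a depth-first split flag per node. The layout below is an assumption for illustration, not the patent's actual bitstream format:

```python
# Sketch of a quadtree partition description for a maximum-size block:
# emit 1 for a node that is subdivided into four children, 0 for a leaf
# that is coded at its current size. The flag list would then be handed
# to the entropy coder.
def quadtree_flags(split_map, node=()):
    # split_map: set of node paths (tuples of child indices 0-3) that split
    if node in split_map:
        flags = [1]                       # 1 = this node is subdivided
        for child in range(4):
            flags += quadtree_flags(split_map, node + (child,))
        return flags
    return [0]                            # 0 = leaf, coded at this size

# root splits; its child 2 splits again; everything else is a leaf
flags = quadtree_flags({(), (2,)})
print(flags)  # [1, 0, 0, 1, 0, 0, 0, 0, 0]
```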
6. The method according to claim 1, characterised by that discrete cosine transform (DCT) is applied as the transformation that converts the representation in the spatial domain into a representation in the frequency domain.
7. The method according to claim 6, characterised by that DCT is applied on blocks smaller than 16×16, and a Hadamard transform is applied on blocks with a size of 16×16 pixels.
8. The method according to claim 1, characterised by that the information content of the modified (predicted) blocks is quantified during the compressibility analysis with the following formula:
sum = Σ_{i=0}^{M} abs( pixel(i) )^2
where M is the number of elements in the predicted block, and pixel (i) is an element of the predicted block, with the computed "sum" value being compared with a given threshold value or against a former "sum" value to establish if the computed "sum" value exceeds said given threshold value or said former "sum" value.
9. The method according to claim 8, characterised by that during the prediction of individual blocks the prediction is carried out using multiple prediction modes, with the prediction mode yielding the lowest "sum" value being applied on the given block.
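Claims 8 and 9 together amount to scoring each prediction mode's residual and keeping the cheapest one. A sketch (the mode names and helper functions are hypothetical examples, not the patent's mode set):

```python
# Quantify each predicted (residual) block with the "sum" of squared
# absolute values from claim 8, and pick the prediction mode with the
# lowest "sum", as in claim 9.
def residual_sum(residual):
    return sum(abs(p) ** 2 for p in residual)

def best_prediction_mode(block, predictors):
    # predictors: mapping of mode name -> function producing a predicted block
    best_mode, best_sum = None, float("inf")
    for mode, predict in predictors.items():
        residual = [b - p for b, p in zip(block, predict(block))]
        s = residual_sum(residual)
        if s < best_sum:
            best_mode, best_sum = mode, s
    return best_mode, best_sum

block = [12, 12, 13, 13]
predictors = {
    "dc":   lambda b: [sum(b) // len(b)] * len(b),  # mean-value predictor
    "zero": lambda b: [0] * len(b),                 # no prediction at all
}
mode, s = best_prediction_mode(block, predictors)
print(mode, s)  # dc 2
```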
10. The method according to claim 1, characterised by that in case the occurrence count of individual block sizes establishes that the frequency of occurrence of the two most frequently occurring block sizes exceeds a given value, all blocks are replaced with blocks of the two most frequently occurring block sizes.
11. The method according to claim 1, characterised by that an error is computed during the compressibility analysis of blocks, with the blocks contributing to the error above a threshold value being divided into further sub-blocks, taking into account the computed error.
12. The method according to claim 11, characterised by that if the error exceeds a predetermined value in case of a given sub-block, that sub-block is divided into further smaller sub-blocks and the compressibility analysis is performed on the resulting block partitioning which includes the smaller sub-blocks.
13. The method according to claim 1, characterised by that blocks and sub-blocks of sizes of 16×16, 8×8, 4×4 or 2×2 are used.
14. Method implemented on a computer having a processor and a memory coupled to said processor for compressing a digitally coded video frame sequence, comprising the steps of
a, dividing a given frame into two-dimensional blocks,
b, establishing a block partitioning of the frame, in specific cases by dividing individual blocks into further sub-blocks,
c, carrying out on the information content of blocks a transformation (DCT) that converts spatial representation into frequency representation, producing thereby transformed multiple-element two-dimensional blocks and
d, modifying the elements of the transformed blocks according to external boundary conditions, and finally
e, encoding the information contained in transformed blocks by entropy coding,
characterised by that
in step d, the data in the transformed multiple-element two-dimensional blocks are modified depending on the size of the blocks and on the bandwidth available for transmitting coded data, and
wherein at least some of steps a through e are performed using said processor.
15. The method according to claim 14, characterised by that the modification of transformed blocks is a quantization.
16. The method according to claim 15, characterised by that the quantization is an MPEG quantization, according to the following function:
qcoeff(j) = ( ( ( data(j)*16 + matrix(j)*0.5 ) / matrix(j) ) * ( 2^17 / (QP*2) ) ) / 2^17
where
data (j) is an element of the transformed multiple-element two-dimensional block and qcoeff (j) is the corresponding quantized element,
matrix (j) is an element of a matrix corresponding in size to the transformed multiple-element two-dimensional block,
QP is the quantization factor.
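The quantization of claim 16 is a fixed-point computation with a 2^17 scale. The sketch below is one plausible integer reading of the formula; the exact rounding behaviour is an assumption (the source formula is partly garbled), and negative coefficients would need sign handling that is omitted here:

```python
# Sketch of an MPEG-style quantization with a 2^17 fixed-point scale factor,
# per claim 16. Inputs are assumed non-negative for simplicity.
def quantize(data, matrix, qp):
    # data:   elements of the transformed (DCT) block
    # matrix: quantization matrix element-matched to the block
    # qp:     quantization factor
    scale = (1 << 17) // (qp * 2)          # precomputed 2^17 / (QP*2)
    return [
        (((d * 16 + m // 2) // m) * scale) >> 17
        for d, m in zip(data, matrix)
    ]

coeffs = quantize([160, 80, 0, 0], [16, 16, 16, 16], qp=4)
print(coeffs)  # [20, 10, 0, 0]
```

Precomputing `scale` replaces a per-element division by `QP*2` with a multiply and shift, which is the usual motivation for the 2^17 factor.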
17. The method according to claim 16, characterised by that values of matrix (j) are taken from an empirically established matrix table, where individual elements of the table are entire matrix (j) matrices, with selection from said table being performed according to the external boundary condition specified in step d.
18. The method according to claim 17, characterised by that selection from the table is performed with respect to the value of the QP quantization factor.
19. The method according to claim 16, characterised by that the entire QP domain is divided into N subdomains with matrix tables being assigned to individual subdomains, where the size of said matrix tables corresponds to the block size, with each subdomain being assigned to a previously specified bandwidth range.
20. The method according to claim 16, characterised by that the external boundary condition of step d, is set by the available storage capacity and/or the available bandwidth.
21. The method according to claim 14, characterised by that in specific cases the information content of the selected blocks is modified prior to the transformation carried out in step c, on the basis of the information contained in previously selected image elements of a neighbouring block or blocks or the information content of a reference block included in a reference frame.
22. The method according to claim 14, characterised by that for encoding intra frames the steps of the method for compressing a digitally coded video frame sequence, comprising the steps of
a, dividing a given frame into blocks,
b, optionally, further dividing individual blocks into smaller blocks,
c, modifying the information content of selected blocks relying on information contained in a neighbouring block or blocks,
d, generating transformed blocks by carrying out on the selected blocks a transformation (DCT) that converts spatial representation into frequency representation, and finally
e, encoding the information content of the transformed blocks by entropy coding,
characterised by that
i, compressibility analysis is performed on said selected blocks before carrying out the transformation specified in step d, and, depending on the result of the analysis
ii, steps c, and d, are carried out on the block or
iii, optionally, the block is further partitioned into sub-blocks, and the compressibility analysis specified in step i, is performed again on the blocks resulting from individual partitioning, and
iv, the block partitioning that will potentially yield the best results is chosen relying on results given by steps i and iii, and finally
v, the transformation specified in step d, is carried out using the block partitioning with the best potential results, relying on the prediction specified in step c are also carried out.
23. Method implemented on a computer having a processor and a memory coupled to said processor for compressing a digitally coded video frame sequence, where the information content of certain frames is encoded from the contents of the preceding or subsequent frames (reference frames), the method further comprising the steps of
a, dividing the frame to be encoded into blocks,
b, searching a matching reference block for the given block to be encoded in the reference image preceding or following the frame containing said block to be encoded,
c, carrying out a compressibility analysis by comparing matched reference blocks and the block to be encoded,
d, selecting the best reference block relying on the result of the compressibility analysis, and
e, encoding said block using the best reference block just selected,
characterised by that in step b, during the search for reference blocks:
i) the block to be encoded is divided into sub-blocks,
ii) the contents of the sub-blocks are analysed,
iii) according to pre-defined criteria, a predetermined number of sub-blocks, preferably at least two, are selected,
iv) a reference block search is performed using the selected sub-blocks, said search being performed in a specific search range in the selected reference frame for the reference block containing sub-blocks that differ the least from the selected sub-blocks, with the relative position of the selected blocks kept constant during said search, and
v) the best reference block is chosen as a result of a search using the selected sub-blocks,

wherein at least some of steps a through e are performed using said processor.
24. The method according to claim 23, characterised by that in step v) the best reference block is chosen in such a way that every time the search finds a block that is better than the current reference block, position data of the newly found block are written into a multiple-element circular buffer, with the last element of the buffer containing the position of the best sub-block.
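The circular buffer of claim 24 can be sketched as follows; the class name `PositionRing` and the cost values are hypothetical, and only the write-on-improvement policy is taken from the claim:

```python
# Claim 24: each time the search finds a block better than the current
# reference, its position is written into a multiple-element circular
# buffer; the most recently written element then holds the best position.
class PositionRing:
    def __init__(self, size):
        self.buf = [None] * size
        self.idx = -1

    def push(self, pos):
        self.idx = (self.idx + 1) % len(self.buf)
        self.buf[self.idx] = pos

    def best(self):
        # last element written = position of the best block found so far
        return self.buf[self.idx]

ring = PositionRing(4)
for pos, cost in [((0, 0), 90), ((2, 1), 55), ((3, 3), 70), ((5, 2), 40)]:
    # write only when the candidate improves on the current best cost
    if ring.idx < 0 or cost < best_cost:
        ring.push(pos)
        best_cost = cost
print(ring.best())  # (5, 2)
```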
25. The method according to claim 23, characterised by that a reference search is carried out using the entire block to be coded, and the search being performed in the vicinity of the reference block that is considered as the best reference block, and the final reference block is chosen according to the result of said search performed using the entire block to be coded.
26. The method according to claim 23, characterised by determining the absolute square difference of the block to be coded and the reference block, and deciding about the acceptability of the reference block on the basis of the determined difference.
27. The method according to claim 23, characterised by that the reference block search is performed in a filtered reference frame.
28. The method according to claim 23, wherein if the results are still not satisfactory, reference block search is carried out in search ranges located in further reference frames.
29. The method according to claim 23, characterised by that in case the search is unsuccessful in all reference frames, the block to be coded is divided into sub-blocks, with a matching reference sub-block being searched for each sub-block, said search being performed in the vicinity of the reference frame positions that are considered the best, in that reference frame which has so far yielded the best results.
30. The method according to claim 29, characterised by that in case dividing the block to be coded into sub-blocks has not produced satisfactory results, the search for reference sub-blocks is carried on in the vicinity of the best positions of other reference frames.
31. The method according to claim 29, characterised by that in case a sub-block remained erroneous, the erroneous sub-block is further divided into smaller sub-blocks, and the search is repeated.
32. The method according to claim 23, characterised by that the block to be coded is subtracted from the reference block, and the difference block is encoded in step e.
33. The method according to claim 23, characterised by carrying out on the information content of the difference block a transformation (DCT or Hadamard transform) that converts spatial representation into frequency representation, producing thereby transformed multiple-element two-dimensional blocks (matrices of DCT or Hadamard coefficients), and encoding the information content of the transformed blocks by entropy coding.
34. The method according to claim 23, characterised by that the steps of the method for compressing a digitally coded video frame sequence, comprising the steps of
a, dividing a given frame into blocks,
b, optionally, further dividing individual blocks into smaller blocks,
c, modifying the information content of selected blocks relying on information contained in a neighbouring block or blocks,
d, generating transformed blocks by carrying out on the selected blocks a transformation (DCT) that converts spatial representation into frequency representation, and finally
e, encoding the information content of the transformed blocks by entropy coding,
characterised by that
i, compressibility analysis is performed on said selected blocks before carrying out the transformation specified in step d, and, depending on the result of the analysis
ii, steps c, and d, are carried out on the block or
iii, optionally, the block is further partitioned into sub-blocks, and the compressibility analysis specified in step i, is performed again on the blocks resulting from individual partitioning, and
iv, the block partitioning that will potentially yield the best results is chosen relying on results given by steps i and iii, and finally
v, the transformation specified in step d, is carried out using the block partitioning with the best potential results, relying on the prediction specified in step c are also carried out during the process of encoding.
35. Method implemented on a computer having a processor and a memory coupled to said processor for compressing a digitally coded video frame sequence, comprising the steps of
a, dividing each frame into blocks that are to be separately coded,
b, carrying out on the information content of the blocks a transformation (DCT) that converts spatial representation into frequency representation, producing thereby transformed blocks, and finally
c, encoding the information contained in transformed blocks by entropy coding, and applying arithmetic coding as entropy coding, during which
a bit sequence is encoded by modifying the lower and upper limit of an interval as a function of values of consecutive bits of the bit sequence, and
the distribution of the already arrived bits of the sequence is taken into account in the function that modifies the limits of said interval,
characterised by that
addresses are generated from already arrived bit values of the bit sequence,
said addresses are applied for addressing individual processing elements of a neural network comprising multiple processing elements, and
parameters of the processing element are modified such that the frequency of individual addressing operations and the value of the currently arriving bit of the bit sequence are used as input data, and the output of the neural network is applied for determining a parameter that modifies the lower or upper limit of the interval,
after an initial learning phase involving the processing of multiple bits, the upper or lower limits of the interval being determined during the encoding of incoming bits as a function of the output of the neural network,

wherein at least some of steps a through c are performed using said processor.
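The interval-narrowing mechanism of claim 35 can be sketched with a drastically simplified model: below, a plain frequency counter stands in for the claimed neural network that predicts the split probability, and renormalization/bit output is omitted. This is an illustration of the interval update only, not the patent's coder:

```python
# Simplified binary arithmetic coder: each bit narrows [low, high) according
# to the model's probability of a 0 bit. A counter-based model replaces the
# neural network of claim 35 for illustration.
class BinaryCoder:
    def __init__(self):
        self.low, self.high = 0.0, 1.0
        self.counts = [1, 1]       # adaptive model: counts of 0s and 1s seen

    def encode_bit(self, bit):
        p0 = self.counts[0] / sum(self.counts)   # model output (NN stand-in)
        split = self.low + (self.high - self.low) * p0
        if bit == 0:
            self.high = split      # bit 0 -> keep the lower sub-interval
        else:
            self.low = split       # bit 1 -> keep the upper sub-interval
        self.counts[bit] += 1      # learn from the bit that just arrived

coder = BinaryCoder()
for b in [1, 1, 0, 1]:
    coder.encode_bit(b)
print(coder.low < coder.high)  # True: any value in [low, high) decodes the bits
```

In the claimed method, the probability `p0` would instead come from the neural network addressed by previously arrived bits, after its initial learning phase.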
36. The method according to claim 35, characterised by that the incoming bit sequence to be encoded is fed into a buffer, and divided into multiple shorter bit sequences.
37. The method according to claim 36, characterised by that the binary value represented by the bits of the shorter bit sequences is regarded as an address.
38. The method according to claim 35, characterised by that the addresses are used for selecting rows of a table, where said table contains function values representing the frequencies of occurrence of the possible values of the current bit to be coded, as well as at least one weight function.
39. The method according to claim 38, characterised by that the weight functions of individual neurons are modified as a function of the function values representing the occurrence frequencies of the potential values of the bit to be coded.
40. The method according to claim 35, characterised by that the potential address ranges of the addresses form unions with one another or are at least partially overlapping.
41. The method according to claim 35, characterised by that the gain and the learning rate of the neural network are dynamically adjusted according to the boundary conditions.
42. The method according to claim 35, characterised by that the encoder is used with different levels, where parameters of each level can be adjusted separately, with a neural network operating with dedicated parameters being assigned to each level.
43. The method according to claim 35, characterised by that the steps of the method for compressing a digitally coded video frame sequence, comprising the steps of
a, dividing a given frame into blocks,
b, optionally, further dividing individual blocks into smaller blocks,
c, modifying the information content of selected blocks relying on information contained in a neighbouring block or blocks,
d, generating transformed blocks by carrying out on the selected blocks a transformation (DCT) that converts spatial representation into frequency representation, and finally
e, encoding the information content of the transformed blocks by entropy coding,
characterised by that
i, compressibility analysis is performed on said selected blocks before carrying out the transformation specified in step d, and, depending on the result of the analysis

ii, steps c, (prediction) and d, are carried out on the block or
iii, optionally, the block is further partitioned into sub-blocks, and the compressibility analysis specified in step i, is performed again on the blocks resulting from individual partitioning, and
iv, the block partitioning that will potentially yield the best results is chosen relying on results given by steps i and iii, and finally
v, the transformation specified in step d, is carried out using the block partitioning with the best potential results, relying on the prediction specified in step c are also carried out during the process of encoding.
44. Method implemented on a computer having a processor and a memory coupled to said processor for compressing a digitally coded video frame sequence, comprising the steps of
a, dividing a given frame into two-dimensional blocks,
b, carrying out on the information content of blocks a transformation (DCT) that converts spatial representation into frequency representation, producing thereby transformed multiple-element two-dimensional blocks and
c, modifying the elements of the transformed blocks according to external boundary conditions, and finally
d, encoding the information contained in transformed blocks by entropy coding,
characterised by that
modification of the data of the transformed multiple-element two-dimensional blocks is carried out in step c, as a function of the output of a neural network,

wherein at least some of steps a through d are performed using said processor.
45. The method according to claim 44, characterised by that the neural network has back propagation or counter propagation structure, or is a simple network composed of multiple neurons, where,
normalized values of expected/coded length and expected/coded quality are used as input data,
a specific number of previously received input data and the current input data are stored in a time window (time slot), with the data contained in the time window being assigned to the input neurons of the neural network.
46. The method according to claim 45, characterised by that the number of neurons in the input layer of the network equals the number of data elements stored in the time window.
47. The method according to claim 46, characterised by that the network comprises a hidden layer.
48. The method according to claim 47, characterised by that the number of neurons in the hidden layer is larger than the number of neurons in the input layer.
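Claims 45 through 48 describe a small network whose input layer matches the time window and whose hidden layer is larger than the input layer. A sketch with all sizes, weights, and the activation chosen arbitrarily for illustration:

```python
# Tiny feed-forward network per claims 45-48: one input neuron per time-window
# element, a hidden layer larger than the input layer, one scalar output.
import random

random.seed(0)                               # reproducible illustrative weights

WINDOW = 8                                   # input neurons = window size
HIDDEN = 16                                  # hidden layer larger than input

w1 = [[random.uniform(-0.1, 0.1) for _ in range(WINDOW)] for _ in range(HIDDEN)]
w2 = [random.uniform(-0.1, 0.1) for _ in range(HIDDEN)]

def forward(window):
    # window: normalized expected/coded length and quality values
    hidden = [max(0.0, sum(w * x for w, x in zip(row, window))) for row in w1]
    return sum(w * h for w, h in zip(w2, hidden))   # e.g. a quantization factor

out = forward([0.5] * WINDOW)
print(isinstance(out, float))  # True
```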
49. The method according to claim 44, characterised by that normalized expected/coded length values and expected/coded quality values are applied as input data,
a predetermined number (N) of previously received input data elements (preferably N=31 or N=63) are stored in a time window together with the current input data, and addresses are generated based on the data contained in the time window,
input data are rounded off to a given bit length for the address generation process,
an address is generated from each element of the time window by means of a hash function, said address pointing to an element of a table corresponding to one of the processing elements of the network.
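The address generation of claims 49 and 51 can be sketched as below; the bit lengths, table size, and the choice of hashing the value together with its window position are all assumptions for illustration:

```python
# Per claims 49/51: round each time-window element to a short bit length,
# then hash it into an address pointing at one processing element's table row.
TABLE_BITS = 12                      # hypothetical table of 2^12 elements

def round_to_bits(value, bits=8):
    # keep only the top `bits` bits of an assumed 16-bit normalized input
    return value >> (16 - bits)

def window_addresses(window):
    addrs = []
    for pos, value in enumerate(window):
        key = (round_to_bits(value), pos)            # value + window position
        addrs.append(hash(key) & ((1 << TABLE_BITS) - 1))
    return addrs

window = [40000, 1234, 65535, 0]     # e.g. expected/coded length values
addrs = window_addresses(window)
print(all(0 <= a < (1 << TABLE_BITS) for a in addrs))  # True
```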
50. The method according to claim 49, characterised by that the neural network is pre-trained utilizing the expected/coded length and quality data in their original form, before they are rounded off for address generation.
51. The method according to claim 49, characterised by that addresses are generated by means of a hash function from the data contained in the time window.
52. The method according to claim 44, characterised by that minimum and maximum allowed bandwidth values are applied as input data for two processing elements that are independent from the rest of the neural network.
53. The method according to claim 52, characterised by that results generated by the processing elements of the network and the two independent processing elements appear at two outputs.
54. The method according to claim 52, characterised by that the output of the neural network is a frame size scaling factor and/or a quantization factor.
55. The method according to claim 52, characterised by that the output of the neural network is a frame size scaling factor and/or a quantization factor.
56. The method according to claim 44, characterised by that the steps of the method for compressing a digitally coded video frame sequence, comprising the steps of
a, dividing a given frame into blocks,
b, optionally, further dividing individual blocks into smaller blocks,
c, modifying the information content of selected blocks relying on information contained in a neighbouring block or blocks,
d, generating transformed blocks by carrying out on the selected blocks a transformation (DCT) that converts spatial representation into frequency representation, and finally
e, encoding the information content of the transformed blocks by entropy coding,
characterised by that
i, compressibility analysis is performed on said selected blocks before carrying out the transformation specified in step d, and, depending on the result of the analysis
ii, steps c, (prediction) and d, are carried out on the block or
iii, optionally, the block is further partitioned into sub-blocks, and the compressibility analysis specified in step i, is performed again on the blocks resulting from individual partitioning, and
iv, the block partitioning that will potentially yield the best results is chosen relying on results given by steps i and iii, and finally
v, the transformation specified in step d, is carried out using the block partitioning with the best potential results, relying on the prediction specified in step c are also carried out during the coding process.
57. Apparatus for encoding digital video data, characterised by comprising a unit adapted for performing the steps of the method according to claim 1.
58. A non-transitory computer-readable medium storing a program containing instructions which, when executed by at least one processor, cause the processor to perform the steps of the method of claim 1.
59. A transmitter, comprising a processor operable to generate a coded sequence by performing the compression method according to claim 1.
60. Method for decompressing encoded video data from a coded sequence produced by the compression method according to claim 1.

The claims below are in addition to those above.
All references to claims which appear below refer to the numbering after this sentence.

1. A system for thermoplastic induction welding of polymer pipes, comprising:
a flexible ferromagnetic strip configured to fit within a coupling of a polymer pipe;
an electromagnetic induction coil configured to surround said coupling; and
a power supply electrically connectable to said electromagnetic induction coil and configured to cause said coil to generate an electromagnetic field upon energizing thereof by said power supply;
whereby a flow of current is induced in said ferromagnetic strip to heat said strip, resulting in thermoplastic welding of separate polymer pipe segments located within said coupling.
2. The system as set forth in claim 1, wherein said flexible ferromagnetic strip comprises a polymer binder combined with at least one ferromagnetic material.
3. The system as set forth in claim 2, wherein said ferromagnetic material comprises strontium ferrite.
4. The system as set forth in claim 3, wherein said ferromagnetic material has a strontium ferrite loading of greater than 30 vol %.
5. The system as set forth in claim 4, wherein said ferromagnetic material has a strontium ferrite loading of greater than or equal to about 50 vol %.
6. The system as set forth in claim 2, wherein said polymer binder comprises low-density polyethylene (LDPE).
7. The system as set forth in claim 2, wherein said flexible ferromagnetic strip further comprises a surfactant.
8. The system as set forth in claim 7, wherein said surfactant comprises an organic surfactant.
9. The system as set forth in claim 8, wherein said organic surfactant comprises a polyester resin.
10. The system as set forth in claim 7, wherein said surfactant comprises stearic acid.
11. The system as set forth in claim 7, wherein said surfactant comprises calcium stearate.
12. The system as set forth in claim 7, wherein said surfactant comprises at least one tackifying resin.
13. The system as set forth in claim 12, wherein said surfactant comprises at least one abietic acid ester.
14. The system as set forth in claim 12, wherein said surfactant comprises at least one rosin ester.
15. The system as set forth in claim 12, wherein said surfactant comprises at least one terpene phenolic resin.
16. The system of claim 7, wherein said surfactant comprises at least one styrene-acrylate copolymer.
17. The system of claim 7, wherein said surfactant comprises at least one acrylate copolymer.
18. A flexible ferromagnetic strip configured to fit within a coupling of a polymer pipe to cause thermoplastic induction welding of separate polymer pipe segments within said coupling upon heating of said strip by electromagnetic induction current, said ferromagnetic strip comprising a polymer binder combined with at least one ferromagnetic material.
19. The flexible ferromagnetic strip as set forth in claim 18, wherein said ferromagnetic material comprises strontium ferrite.
20. The flexible ferromagnetic strip as set forth in claim 19, wherein said ferromagnetic material has a strontium ferrite loading of greater than 30 vol %.
21. The flexible ferromagnetic strip as set forth in claim 20, wherein said ferromagnetic material has a strontium ferrite loading of greater than or equal to about 50 vol %.
22. The flexible ferromagnetic strip as set forth in claim 18, wherein said polymer binder comprises low-density polyethylene (LDPE).
23. The flexible ferromagnetic strip as set forth in claim 18, wherein said flexible ferromagnetic strip further comprises at least one surfactant.
24. The flexible ferromagnetic strip as set forth in claim 23, wherein said surfactant comprises an organic surfactant.
25. The flexible ferromagnetic strip as set forth in claim 24, wherein said organic surfactant comprises a polyester resin.
26. The flexible ferromagnetic strip as set forth in claim 23, wherein said surfactant comprises stearic acid.
27. The flexible ferromagnetic strip as set forth in claim 23, wherein said surfactant comprises calcium stearate.
28. The flexible ferromagnetic strip as set forth in claim 23, wherein said surfactant comprises at least one tackifying resin.
29. The flexible ferromagnetic strip as set forth in claim 28, wherein said surfactant comprises at least one abietic acid ester.
30. The flexible ferromagnetic strip as set forth in claim 28, wherein said surfactant comprises at least one rosin ester.
31. The flexible ferromagnetic strip as set forth in claim 28, wherein said surfactant comprises at least one terpene phenolic resin.
32. The flexible ferromagnetic strip of claim 23, wherein said surfactant comprises at least one styrene-acrylate copolymer.
33. The flexible ferromagnetic strip of claim 23, wherein said surfactant comprises at least one acrylate copolymer.

The claims below are in addition to those above.
All references to claims which appear below refer to the numbering after this sentence.

1. A method of producing an audio/video recording, comprising the steps of:
receiving an input video image at a portable audio/video recording device;
converting the input video image into a digital production format by sampling the input program at a sampling frequency in excess of 18 megahertz;
providing a high-capacity digital video storage equipped with an asynchronous program recording and reproducing capability and performing a frame-rate conversion; and
processing the recorded video program in the production format using the high-capacity video storage on a selective basis to output a version of the video program having a desired frame rate and image dimensions in pixels.
2. The method of claim 1, wherein the portable audio/video recording device is a camcorder.
3. The method of claim 1, wherein the portable audio/video recording device is a portable communication device.
4. A portable audio/video recording device, comprising:
an image sensor, having pixel dimensions of at least 1024×576;
a graphics processor converting the input video image into a digital production format by sampling the input program at a sampling frequency in excess of 18 megahertz;
a high-capacity digital video storage equipped with an asynchronous program recording and reproducing capability and performing a frame-rate conversion; and
output circuitry processing the recorded video program in the production format using the high-capacity video storage on a selective basis to output a version of the video program having a desired frame rate and image dimensions in pixels.
5. The device of claim 4, wherein the portable audio/video recording device is a camcorder.
6. The device of claim 4, wherein the portable audio/video recording device is a portable communication device.

The claims below are in addition to those above.
All references to claims which appear below refer to the numbering after this sentence.

1. An image reading apparatus comprising:
a reading unit including a movable sensor;
a first reading mechanism which moves said reading unit to read an original placed on a platen;
a second reading mechanism which conveys the original relative to said reading unit to read the original;
a common driving source which drives said first reading mechanism and said second reading mechanism;
a transfer unit which transfers a driving force from said driving source to said first reading mechanism and said second reading mechanism; and
a switching unit which switches rotation used to drive said second reading mechanism to one of forward rotation and reverse rotation with respect to rotation of said driving source in one direction, said switching unit operating as said reading unit moves.
2. The apparatus according to claim 1, wherein said switching unit operates in accordance with a moving amount and a moving direction of said reading unit.
3. The apparatus according to claim 1, further comprising a recognition unit switched by said switching unit, said recognition unit being read by said reading unit to identify the forward rotation and the reverse rotation of said second reading mechanism.
4. The apparatus according to claim 3, wherein said recognition unit is disposed between a position where said reading unit operates said switching unit, and a position of said reading unit when the original is conveyed and read.
5. The apparatus according to claim 1, wherein a reference plate configured to correct a position of said reading unit in a moving direction, or perform color correction in processing a read image is disposed between a position of said reading unit when said reading unit is moved to read the original, and a position of said reading unit when the original is conveyed and read.