1. A computer implemented method of determining extrinsic parameters for a vehicle vision system, the method comprising:
processing an image of a road captured by a camera to identify a first road lane marking and a second road lane marking in the image, where the first road lane marking and the second road lane marking are two road lane markings that extend parallel to each other;
determining, for at least one of the two road lane markings:
a first set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in an image plane; and
a second set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in a road plane, where the second set of parameters are determined based on information related to spacing of the two road lane markings;
identifying a linear transformation which, for at least one of the two road lane markings, defines a mapping between the first set of parameters and the second set of parameters; and
establishing the extrinsic parameters based on the identified linear transformation, where the established extrinsic parameters include an orientation of the camera with respect to a vehicle coordinate system.
2. The method of claim 1, where the identifying the linear transformation comprises determining matrix elements of a homography matrix.
3. The method of claim 2, where the matrix elements of the homography matrix are determined such that
Hᵀ·p′i
approximates pi for i=1, . . . , M, where i denotes a road lane marking identifier, M denotes a count of the two road lane markings, H denotes the homography matrix that comprises three rows and three columns, T denotes the matrix transposition, p′i denotes a 3-tuple of the first set of parameters identified for an ith road lane marking, and pi denotes a 3-tuple of the second set of parameters identified for the ith road lane marking.
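The mapping recited in claim 3 applies the transposed homography to a 3-tuple of line parameters. A minimal illustrative sketch in Python/NumPy (the matrix and parameter values below are hypothetical, not from the claims):

```python
import numpy as np

# Hypothetical 3x3 homography H relating image-plane line parameters
# p'_i = (a'_i, b'_i, c'_i) to road-plane line parameters
# p_i = (a_i, b_i, c_i) via H^T . p'_i ~ p_i.
H = np.array([[1.0, 0.0, 0.0],
              [0.1, 1.0, 0.0],
              [0.0, 0.2, 1.0]])

def map_line_params(H: np.ndarray, p_img: np.ndarray) -> np.ndarray:
    """Map a 3-tuple of image-plane line parameters to the road plane."""
    return H.T @ p_img

p_img = np.array([0.5, -1.0, 3.0])   # a'_i, b'_i, c'_i (illustrative values)
p_road = map_line_params(H, p_img)
print(p_road)  # [ 0.4 -0.4  3. ]
```

Note that line parameters transform with Hᵀ, whereas point coordinates would transform with H itself; this is the standard duality between points and lines under a homography.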
4. The method of claim 2, where the determining the first set of parameters comprises determining parameter values a′i, b′i and c′i, such that the line along which the at least one of the two road lane markings extends in the image plane is defined by
a′i·x′+b′i·y′+c′i=0,
where i denotes a road lane marking identifier and x\u2032 and y\u2032 denote coordinates in the image plane.
5. The method of claim 4, where the determining the second set of parameters comprises determining parameter values ai, bi and ci, such that the line along which the at least one of the two road lane markings extends in the road plane is defined by
ai·x+bi·y+ci=0,
where x and y denote coordinates in the road plane.
6. The method of claim 5, where the matrix elements of the homography matrix are determined based on
N(Hᵀ·(a′i, b′i, c′i)ᵀ−(ai, bi, ci)ᵀ),
for i=1, . . . , M, where N(·) denotes a vector norm, i denotes a road lane marking identifier, M denotes a count of the two road lane markings, H denotes the homography matrix that comprises three rows and three columns, and T denotes a matrix transposition.
7. The method of claim 5, where the matrix elements of the homography matrix are determined such that the sum over i=1, . . . , M of
‖Hᵀ·(a′i, b′i, c′i)ᵀ−(ai, bi, ci)ᵀ‖²
is minimized.
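Because Hᵀ·p′i is linear in the entries of H, the minimization recited in claim 7 reduces to an ordinary linear least-squares problem. A sketch with synthetic data, assuming the overall scale of H is fixed by the known road-plane parameters (a homography is otherwise only defined up to scale):

```python
import numpy as np

def estimate_homography(p_img: np.ndarray, p_road: np.ndarray) -> np.ndarray:
    """Least-squares estimate of H minimizing sum_i ||H^T p'_i - p_i||^2.

    p_img:  (M, 3) image-plane line parameters (a'_i, b'_i, c'_i).
    p_road: (M, 3) road-plane line parameters (a_i, b_i, c_i).
    Each marking contributes three linear equations in the nine
    unknown entries of H.
    """
    M = p_img.shape[0]
    A = np.zeros((3 * M, 9))
    b = np.zeros(3 * M)
    for i in range(M):
        for j in range(3):
            # (H^T p'_i)[j] = sum_k H[k, j] * p'_i[k]; unknown H[k, j]
            # sits at flat index 3*k + j of h = H.flatten().
            A[3 * i + j, j::3] = p_img[i]
            b[3 * i + j] = p_road[i, j]
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h.reshape(3, 3)

# Synthetic check with exact correspondences p_i = H^T p'_i:
rng = np.random.default_rng(0)
H_true = rng.normal(size=(3, 3))
p_img = rng.normal(size=(4, 3))
p_road = p_img @ H_true          # row i equals H_true^T @ p_img[i]
H_est = estimate_homography(p_img, p_road)
print(np.allclose(H_est, H_true))  # True
```

With noisy measurements from several markings (or several images, per claim 13), the same call returns the minimizer of the squared-norm sum rather than an exact solution.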
8. The method of claim 5, where, for at least one of the two road lane markings, one of the parameter values ai or bi is set equal to zero, and a quotient of ci and the other one of the parameter values ai or bi is set based on the information related to spacing of the two road lane markings.
9. The method of claim 8, where, for at least one of the two road lane markings, the quotient of ci and the other one of the parameter values ai or bi is set to
(2·k(i)+1)·l/2+s,
where k(i) denotes an integer number that depends on the road lane marking identifier, l denotes a respective road lane width, and s denotes an offset from a middle longitudinal axis of the road lane that is independent of the road lane marking identifier.
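The spacing formula of claim 9 places each marking at an odd multiple of half the lane width, shifted by the vehicle's lateral offset. A small illustrative sketch (the lane width and offset values are hypothetical):

```python
def marking_offset(k_i: int, lane_width: float, s: float) -> float:
    """Lateral position of a lane marking: (2*k(i) + 1) * l/2 + s.

    k_i: integer marking index, e.g. 0 for the right boundary of the
         vehicle's own lane and -1 for its left boundary (an assumed
         convention, not specified in the claims).
    lane_width: road lane width l, e.g. retrieved from a map database.
    s: offset of the vehicle from the middle longitudinal axis of the lane.
    """
    return (2 * k_i + 1) * lane_width / 2.0 + s

# With l = 3.5 m and the vehicle centered (s = 0), the two boundaries
# of the vehicle's own lane lie at +/- 1.75 m:
print(marking_offset(0, 3.5, 0.0))   # 1.75
print(marking_offset(-1, 3.5, 0.0))  # -1.75
```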
10. The method of claim 1, where the processing the image includes identifying a third road lane marking that extends parallel to the first and the second road lane markings.
11. The method of claim 1, where a count M of the two road lane markings and a count Me of extrinsic parameters that are to be determined fulfil Me ≤ 2·M.
12. The method of claim 1, where the determining the second set of parameters comprises determining a vehicle position and retrieving information on a road lane width from a map database based on the determined vehicle position.
13. The method of claim 1, where, for at least one of the two road lane markings, the line along which the road lane marking extends in the image plane is determined for multiple images of the road, and parameters defining orientations and positions for two or more of the multiple images of the road are averaged to determine the first set of parameters.
14. A calibration system for determining extrinsic parameters of a vehicle vision system comprising:
a processor;
a non-transient memory device in communication with the processor;
processor executable instructions stored in the memory device that when executed by the processor are configured to:
process data regarding a road sensed by a sensor to identify a first road lane marking and a second road lane marking, where the first road lane marking and the second road lane marking are two road lane markings that extend parallel to each other;
determine, for at least one of the two road lane markings:
a first set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in an image plane; and
a second set of parameters defining an orientation and position of a line along which the at least one of the two road lane markings extends in a road plane, where the second set of parameters are determined based on information related to spacing of the two road lane markings;
identify a linear transformation which, for at least one of the two road lane markings, defines a mapping between the first set of parameters and the second set of parameters; and
establish the extrinsic parameters based on the identified linear transformation, where the established extrinsic parameters include an orientation of the sensor with respect to a vehicle coordinate system.
15. The system of claim 14, where a homography matrix defines the linear transformation.
16. The system of claim 14, where a bilinear equation defines coordinates of the road plane.
17. The system of claim 14, where a bilinear equation defines coordinates of the image plane.
18. The system of claim 14, where the orientation of the sensor comprises a yaw angle, a pitch angle, and a roll angle.
19. A computer implemented method of determining extrinsic parameters for a vehicle vision system, the method comprising:
processing an image of a road captured by a camera to identify a first road lane marking and a second road lane marking in the image, where the first road lane marking and the second road lane marking are two road lane markings that extend parallel to each other;
determining, for at least one of the two road lane markings:
a first set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in an image plane; and
a second set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in a road plane, where the second set of parameters are determined based on information related to spacing of the two road lane markings;
identifying, by at least determining matrix elements of a homography matrix, a linear transformation which, for at least one of the two road lane markings, defines a mapping between the first set of parameters and the second set of parameters; and
establishing the extrinsic parameters based on the identified linear transformation, where the established extrinsic parameters include an orientation of the camera.
20. The method of claim 19,
where a first bilinear equation defines coordinates of the road plane and a second bilinear equation defines coordinates of the image plane,
where, for at least one of the two road lane markings, a first parameter of the first bilinear equation is set equal to zero, and a quotient of a second parameter and a third parameter of the first bilinear equation is set based on information related to spacing of the two road lane markings,
where, for at least one of the two road lane markings, the quotient of the second parameter and the third parameter is set to
(2·k(i)+1)·l/2+s,
and
where k(i) denotes an integer number that depends on a road lane marking identifier, l denotes a respective road lane width, and s denotes an offset from a middle longitudinal axis of a respective road lane that is independent of the road lane marking identifier.
The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.
1. A method operable on a computer that is coupled to a memory containing video data of a prescribed physical area, the method responsive to the movement of objects in the prescribed physical area, the method comprising:
identifying objects in the prescribed physical area by querying the video data, wherein the objects that are identified are moving in the prescribed physical area;
specifying by a user, a direction arrow completely within the prescribed physical area, wherein the direction arrow is superimposed on a user-selected portion of an image of the prescribed physical area;
determining a number of objects that move in a direction of the specified direction arrow; and
generating a visually perceptible output of the number of objects that move in the direction of the specified direction arrow, such that a movement path of each of the respective numbers of objects is superimposed on the user-selected portion of the image of the prescribed physical area.
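The counting step recited above can be sketched by projecting each tracked object's net displacement onto the user-specified arrow direction. An illustrative sketch; the track format and the cosine tolerance are assumptions, not part of the claims:

```python
import math

def count_moving_in_direction(tracks, arrow_start, arrow_end,
                              cos_threshold=0.7):
    """Count tracks whose net displacement points along the direction arrow.

    tracks: list of [(x, y), ...] position sequences, e.g. from stored
            video metadata.
    arrow_start, arrow_end: endpoints of the user-drawn direction arrow.
    cos_threshold: minimum cosine similarity to count as moving in the
            direction of the arrow (an assumed tolerance).
    """
    ax, ay = arrow_end[0] - arrow_start[0], arrow_end[1] - arrow_start[1]
    a_len = math.hypot(ax, ay)
    count = 0
    for track in tracks:
        dx = track[-1][0] - track[0][0]
        dy = track[-1][1] - track[0][1]
        d_len = math.hypot(dx, dy)
        if d_len == 0 or a_len == 0:
            continue  # stationary object or degenerate arrow
        cos_sim = (ax * dx + ay * dy) / (a_len * d_len)
        if cos_sim >= cos_threshold:
            count += 1
    return count

tracks = [[(0, 0), (10, 1)],   # moves right: counted
          [(5, 5), (5, 15)],   # moves up: not counted
          [(2, 0), (-8, 0)]]   # moves left: not counted
print(count_moving_in_direction(tracks, (0, 0), (1, 0)))  # 1
```

The region-of-interest variant (claim 5 below) would simply filter `tracks` to those whose positions fall inside the user-defined region before counting.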
2. The method according to claim 1, wherein the act of generating the visually perceptible output of the number of objects further comprises:
generating a still image of the prescribed physical area; and
superimposing a representation of the number of objects over the still image.
3. The method according to claim 2, wherein the act of superimposing the representation of the number of objects over the still image further comprises:
plotting the movements of the number of objects that moved in the direction of the specified direction arrow in the prescribed physical area.
4. The method according to claim 1, further comprising:
specifying a virtual line in the prescribed physical area,
wherein the specified direction arrow is specified with respect to the virtual line.
5. The method according to claim 1, further comprising:
defining a region of interest in the prescribed physical area,
wherein only objects that move in the direction of the specified direction arrow in the region of interest are determined.
6. A method operable on a computer to detect and analyze movement of objects in a prescribed physical area, the method comprising:
receiving video data of the prescribed physical area from at least one camera;
detecting movement of objects in the prescribed physical area;
generating video metadata in response to the detection act, the video metadata representing the detected objects and their movement in the prescribed physical area;
storing the video metadata in a database;
specifying by a user, a direction arrow completely within the prescribed physical area, wherein the direction arrow is superimposed on a user-selected portion of an image of the prescribed physical area;
determining a number of objects that move in a direction of the specified direction arrow; and
generating a visually perceptible output of the number of objects that move in the direction of the specified direction arrow, such that a movement path of each of the respective numbers of objects is superimposed on the user-selected portion of the image of the prescribed physical area.
7. A system for detecting and analyzing movement of objects in a prescribed physical area, the system comprising:
an interface that receives video data of the prescribed physical area;
a video management system coupled to the interface, the video management system storing the video data;
a video analytics engine coupled to the video management system, the video analytics engine generating video metadata representing at least one object and its movement in the prescribed physical area;
an object movement analytics engine coupled to the video analytics engine, the object movement analytics engine determining a number of objects that move in a direction of a specified direction arrow completely within the prescribed physical area, wherein the direction arrow is superimposed by a user on a user-selected portion of an image of the prescribed physical area; and
a display coupled to the object movement analytics engine, the display displaying a visually perceptible output of the number of objects that move in the direction of the specified direction arrow, such that a movement path of each of the respective numbers of objects is superimposed on the user-selected portion of the image of the prescribed physical area.
8. The system of claim 7, wherein the interface receives video data from a plurality of cameras corresponding to a plurality of prescribed physical areas.