
1-12. (canceled)
13. A column select device configured to select one or more target columns of bit cells within which a write operation is to be performed, the column select device comprising:
a first power supply rail having a first voltage level;
a second power supply rail having a second voltage level lower than the first voltage level;
voltage boosting circuitry configured to:
generate a column select signal with a selected voltage level, wherein the column select signal with the selected voltage level is supplied to one or more column select transistors of the one or more target columns; and
generate a column select signal with an unselected voltage level, wherein the column select signal with the unselected voltage level is supplied to one or more column select transistors of one or more unselected columns other than the one or more target columns, and wherein at least one of the selected voltage level and the unselected voltage level is outside of a voltage range between the first voltage level and the second voltage level.
14. The column select device of claim 13, wherein the column select transistors are NMOS transistors.
15. The column select device of claim 13, wherein the unselected voltage level holds the one or more column select transistors of the one or more unselected columns in a high impedance state.
16. The column select device of claim 13, wherein the selected voltage level holds the one or more column select transistors of the one or more target columns in a low impedance state.
17. The column select device of claim 13, wherein the unselected voltage level is lower than the second voltage level and the selected voltage level is higher than the first voltage level.
18. The column select device of claim 13, wherein the voltage boosting circuitry comprises:
a select charge pump associated with the selected voltage level; and
an unselect charge pump associated with the unselected voltage level.
19. The column select device of claim 13, wherein the bit cells are 6T bit cells.
20. An apparatus configured to select one or more target columns of bit cells within which a write operation is to be performed, the apparatus comprising:
means for providing a first voltage level;
means for providing a second voltage level that is lower than the first voltage level;
means for generating a column select signal with a selected voltage level, wherein the column select signal with the selected voltage level is supplied to one or more column select transistors of the one or more target columns; and
means for generating a column select signal with an unselected voltage level, wherein the column select signal with the unselected voltage level is supplied to one or more column select transistors of one or more unselected columns other than the one or more target columns, and wherein at least one of the selected voltage level and the unselected voltage level is outside of a voltage range between the first voltage level and the second voltage level.
21. The apparatus of claim 20, wherein the column select transistors are NMOS transistors.
22. The apparatus of claim 20, wherein the unselected voltage level holds the one or more column select transistors of the one or more unselected columns in a high impedance state.
23. The apparatus of claim 20, wherein the selected voltage level holds the one or more column select transistors of the one or more target columns in a low impedance state.
24. The apparatus of claim 20, wherein the unselected voltage level is lower than the second voltage level and the selected voltage level is higher than the first voltage level.
25. The apparatus of claim 20, wherein the means for generating the column select signal with the selected voltage level comprises a select charge pump associated with the selected voltage level, and wherein the means for generating the column select signal with the unselected voltage level comprises an unselect charge pump associated with the unselected voltage level.
26. The apparatus of claim 20, wherein the bit cells are 6T bit cells.
27. A method of selecting one or more target columns of bit cells within which a write operation is to be performed, the method comprising:
generating a column select signal with a selected voltage level and a column select signal with an unselected voltage level;
supplying the column select signal with the selected voltage level to one or more column select transistors of the one or more target columns; and
supplying the column select signal with the unselected voltage level to one or more column select transistors of one or more unselected columns other than the one or more target columns, and wherein at least one of the selected voltage level and the unselected voltage level is outside of a voltage range between a first voltage level and a second voltage level.
28. The method of claim 27, wherein the column select transistors are NMOS transistors.
29. The method of claim 27, wherein the unselected voltage level holds the one or more column select transistors of the one or more unselected columns in a high impedance state.
30. The method of claim 27, wherein the selected voltage level holds the one or more column select transistors of the one or more target columns in a low impedance state.
31. The method of claim 27, wherein the unselected voltage level is lower than the second voltage level and the selected voltage level is higher than the first voltage level.
32. The method of claim 27, wherein generating the column select signal with the selected voltage level is associated with a select charge pump, and wherein generating the column select signal with the unselected voltage level is associated with an unselect charge pump.
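For illustration, the rail-boosting behavior recited in claims 13, 17, 27 and 31 can be sketched in Python; the rail voltages and the boost margin below are illustrative assumptions, not values taken from the claims:

```python
def column_select_levels(vdd, vss, boost):
    """Sketch of claims 13/27: generate drive levels for the column
    select transistors such that at least one level lies outside the
    rail-to-rail range [vss, vdd].

    Per claims 17/24/31, the selected level is boosted above the first
    (higher) rail and the unselected level is pulled below the second
    (lower) rail."""
    selected = vdd + boost    # holds target-column pass gates at low impedance
    unselected = vss - boost  # holds unselected-column pass gates at high impedance
    return selected, unselected


# Illustrative 1.0 V / 0 V rails with an assumed 0.2 V boost margin.
sel, unsel = column_select_levels(1.0, 0.0, 0.2)
```

In this sketch both levels are boosted (claims 17/24/31); the independent claims only require that at least one of the two lie outside the rail range.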

The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.

1. An alert apparatus comprising:
a digital processor;
an accelerometer coupled to the digital processor;
wireless pairing circuitry for pairing wirelessly with a smartphone, coupled to the digital processor;
software executable on the digital processor from a non-transitory medium;
an onboard diagnostics (OBD) connector coupled to the digital processor, enabling monitoring of an OBD system of an automobile; and
stored criteria for determining if an accident has occurred;
wherein the apparatus monitors the OBD system of the automobile for specific information, monitors the accelerometer for deceleration exceeding a preset value, updates position of the automobile periodically, compares real-time data with the stored criteria, and, in the event of a determination that an accident has occurred, transmits an alert signal to the smartphone via the wireless pairing circuitry.
2. The apparatus of claim 1 further comprising Global Positioning System (GPS) circuitry coupled to the digital processor, wherein the apparatus transmits global position to the smartphone along with the alert signal.
3. The apparatus of claim 1 wherein the accelerometer is capable of determining changes in orientation that may indicate the automobile has rolled over, wherein the fact of rollover is transmitted to the smartphone along with the alert signal in the event of an accident.
4. The apparatus of claim 1 wherein deployment of one or more airbags is determined from the OBD system, and the facts of airbag deployment are transmitted to the smartphone along with the alert signal.
5. The apparatus of claim 1 wherein the determination that an accident has occurred derives from any single event of rollover, deceleration beyond the preset value, or deployment of one or more airbags.
6. The apparatus of claim 1 wherein the wireless pairing circuitry is Bluetooth™ protocol compatible.
7. A smartphone comprising:
a digital processor;
wireless pairing circuitry; and
an application executing on the digital processor from a non-transitory medium;
wherein the application provides a process comprising:
receiving an alert from a paired device that an accident has occurred;
sending, triggered by the alert, pre-programmed communications to pre-programmed destinations regarding the fact of the accident having occurred.
8. The smartphone of claim 7 wherein the alert from the paired device is accompanied by GPS coordinates, which are provided along with at least some of the pre-programmed communications.
9. The smartphone of claim 7 wherein the smartphone has a Global Positioning System, and GPS coordinates are provided along with at least some of the pre-programmed communications.
10. The smartphone of claim 7 wherein the alert is accompanied by specific information or data about the accident, including one or both of rollover or air bags deployed.
11. The smartphone of claim 7 wherein the pre-programmed communications include one or more of emails, text messages or recorded voice messages.
12. The smartphone of claim 7 wherein the pre-programmed communications include an alert to an Internet-connected server that an accident has occurred.
13. The smartphone of claim 7 wherein the wireless pairing circuitry is Bluetooth™ protocol compatible.
14. A system comprising:
an alert apparatus having a digital processor, an accelerometer, wireless pairing circuitry, and an onboard diagnostics (OBD) connector, the alert apparatus enabled to retrieve data from an automobile OBD system, and to determine from local and retrieved data that an accident has occurred;
a smartphone executing an application enabled to receive an alert from the alert apparatus, and to send as a result pre-programmed communications to pre-configured destinations that an accident has occurred; and
an Internet-connected server enabled to receive an alert as a pre-programmed communication from the smartphone and to send as a result pre-programmed communications to pre-configured destinations that an accident has occurred.
15. The system of claim 14 wherein the smartphone sends GPS coordinates along with pre-programmed communications.
16. The system of claim 15 wherein one or more of the pre-programmed communications comprise requests for service from one or more emergency service organizations.
17. A method comprising:
monitoring onboard diagnostics information of an automobile by an alert apparatus having an accelerometer and wireless pairing circuitry;
determining by the alert apparatus that an accident has occurred;
sending an accident alert to a smartphone via the wireless pairing circuitry; and
transmitting by the smartphone one or more preprogrammed communications to one or more pre-configured destinations as a result of receiving the alert.
18. The method of claim 17 wherein one of the pre-programmed communications is to an Internet-connected server that transmits one or more communications to one or more pre-configured destinations as a result of receiving the alert.
19. The method of claim 17 wherein the smartphone either receives GPS coordinates from the alert apparatus, or generates the coordinates in the smartphone, and sends the GPS coordinates along with one or more of the pre-programmed communications.
20. The method of claim 19 wherein one or more of the pre-programmed communications is a request for service from one or more emergency service organizations.
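The single-event accident determination of claim 5 reduces to a simple disjunction. A minimal Python sketch, with assumed parameter names (the claims do not name these inputs): rollover is inferred from the accelerometer orientation (claim 3), airbag state comes from the OBD system (claim 4), and the deceleration limit is the claimed preset value.

```python
def accident_occurred(deceleration, preset_value, rolled_over, airbags_deployed):
    """Claim 5 sketch: an accident is determined from any single event of
    rollover, deceleration beyond the preset value, or deployment of one
    or more airbags."""
    return rolled_over or airbags_deployed or deceleration > preset_value
```

Any one condition alone suffices; no combination of events is required.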


1. A computer implemented method of determining extrinsic parameters for a vehicle vision system, the method comprising:
processing an image of a road captured by a camera to identify a first road lane marking and a second road lane marking in the image, where the first road lane marking and the second road lane marking are two road lane markings that extend parallel to each other;
determining, for at least one of the two road lane markings:
a first set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in an image plane; and
a second set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in a road plane, where the second set of parameters are determined based on information related to spacing of the two road lane markings;
identifying a linear transformation which, for at least one of the two road lane markings, defines a mapping between the first set of parameters and the second set of parameters; and
establishing the extrinsic parameters based on the identified linear transformation, where the established extrinsic parameters include an orientation of the camera with respect to a vehicle coordinate system.
2. The method of claim 1, where the identifying the linear transformation comprises determining matrix elements of a homography matrix.
3. The method of claim 2, where the matrix elements of the homography matrix are determined such that
H^T·p′_i
approximates p_i for i = 1, …, M, where i denotes a road lane marking identifier, M denotes a count of the two road lane markings, H denotes the homography matrix that comprises three rows and three columns, T denotes the matrix transposition, p′_i denotes a 3-tuple of the first set of parameters identified for an ith road lane marking, and p_i denotes a 3-tuple of the second set of parameters identified for the ith road lane marking.
4. The method of claim 2, where the determining the first set of parameters comprises determining parameter values a′_i, b′_i and c′_i, such that the line along which the at least one of the two road lane markings extends in the image plane is defined by
a′_i·x′ + b′_i·y′ + c′_i = 0,
where i denotes a road lane marking identifier and x′ and y′ denote coordinates in the image plane.
5. The method of claim 4, where the determining the second set of parameters comprises determining parameter values a_i, b_i and c_i, such that the line along which the at least one of the two road lane markings extends in the road plane is defined by
a_i·x + b_i·y + c_i = 0,
where x and y denote coordinates in the road plane.
6. The method of claim 5, where the matrix elements of the homography matrix are determined based on
N( H^T·(a′_i, b′_i, c′_i)^T − (a_i, b_i, c_i)^T ),
for i = 1, …, M, where N(·) denotes a vector norm, i denotes a road lane marking identifier, M denotes a count of the two road lane markings, H denotes the homography matrix that comprises three rows and three columns, and T denotes a matrix transposition.
7. The method of claim 5, where the matrix elements of the homography matrix are determined such that
∑_{i=1}^{M} ‖ H^T·(a′_i, b′_i, c′_i)^T − (a_i, b_i, c_i)^T ‖²
is minimized.
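The minimization in claim 7 is linear in the entries of H, so it can be sketched as an ordinary least-squares problem with NumPy. The function and variable names are illustrative, not from the claims:

```python
import numpy as np

def estimate_homography(image_lines, road_lines):
    """Least-squares sketch of claim 7: find the 3x3 matrix H minimizing
    sum_i || H^T p'_i - p_i ||^2, where p'_i = (a'_i, b'_i, c'_i) are
    image-plane line parameters and p_i = (a_i, b_i, c_i) are road-plane
    line parameters.

    H^T p'_i = p_i is equivalent to p'_i^T H = p_i^T, so stacking the
    3-tuples as rows gives the linear system P_img @ H = P_road."""
    P_img = np.asarray(image_lines, dtype=float)    # shape (M, 3)
    P_road = np.asarray(road_lines, dtype=float)    # shape (M, 3)
    H, *_ = np.linalg.lstsq(P_img, P_road, rcond=None)
    return H
```

With only M = 2 markings the system is underdetermined and `lstsq` returns the minimum-norm solution; this is consistent with claim 11's bound that the count of extrinsic parameters not exceed 2·M.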
8. The method of claim 5, where, for at least one of the two road lane markings, one of the parameter values ai or bi is set equal to zero, and a quotient of ci and the other one of the parameter values ai or bi is set based on the information related to spacing of the two road lane markings.
9. The method of claim 8, where, for at least one of the two road lane markings, the quotient of ci and the other one of the parameter values ai or bi is set to
(2·k(i)+1)·l/2+s,
where k(i) denotes an integer number that depends on the road lane marking identifier, l denotes a respective road lane width, and s denotes an offset from a middle longitudinal axis of the road lane that is independent of the road lane marking identifier.
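Claims 8–9 pin down the road-plane line parameters from lane geometry alone. A minimal sketch, assuming a_i is the nonzero parameter, a sign convention, and the reading of the claimed expression as (2·k(i)+1)·l/2 + s:

```python
def road_plane_line(k, lane_width, offset):
    """Claims 8-9 sketch: for a marking parallel to the driving
    direction, one of a_i or b_i is set equal to zero (here b_i), and
    the quotient of c_i and the other parameter is taken from the lane
    spacing: c_i / a_i = (2*k(i) + 1) * l / 2 + s."""
    a = 1.0
    b = 0.0  # claim 8: one of a_i or b_i is set equal to zero
    c = (2 * k + 1) * lane_width / 2.0 + offset
    return a, b, c
```

k(i) indexes markings left or right of the lane's middle longitudinal axis, so successive integers place lines half a lane width apart on either side.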
10. The method of claim 1, where the processing the image includes identifying a third road lane marking that extends parallel to the first and the second road lane markings.
11. The method of claim 1, where a count M of the two road lane markings and a count Me of extrinsic parameters that are to be determined fulfil Me ≤ 2·M.
12. The method of claim 1, where the determining the second set of parameters comprises determining a vehicle position and retrieving information on a road lane width from a map database based on the determined vehicle position.
13. The method of claim 1, where, for at least one of the two road lane markings, the line along which the road lane marking extends in the image plane is determined for multiple images of the road, and parameters defining orientations and positions for two or more of the multiple images of the road are averaged to determine the first set of parameters.
14. A calibration system for determining extrinsic parameters of a vehicle vision system comprising:
a processor;
a non-transient memory device in communication with the processor;
processor executable instructions stored in the memory device that, when executed by the processor, are configured to:
process data regarding a road sensed by a sensor to identify a first road lane marking and a second road lane marking, where the first road lane marking and the second road lane marking are two road lane markings that extend parallel to each other;
determine, for at least one of the two road lane markings:
a first set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in an image plane; and
a second set of parameters defining an orientation and position of a line along which the at least one of the two road lane markings extends in a road plane, where the second set of parameters are determined based on information related to spacing of the two road lane markings;
identify a linear transformation which, for at least one of the two road lane markings, defines a mapping between the first set of parameters and the second set of parameters; and
establish the extrinsic parameters based on the identified linear transformation, where the established extrinsic parameters include an orientation of the sensor with respect to a vehicle coordinate system.
15. The system of claim 14, where a homography matrix defines the linear transformation.
16. The system of claim 14, where a bilinear equation defines coordinates of the road plane.
17. The system of claim 14, where a bilinear equation defines coordinates of the image plane.
18. The system of claim 14, where the orientation of the sensor comprises a yaw angle, a pitch angle, and a roll angle.
19. A computer implemented method of determining extrinsic parameters for a vehicle vision system, the method comprising:
processing an image of a road captured by a camera to identify a first road lane marking and a second road lane marking in the image, where the first road lane marking and the second road lane marking are two road lane markings that extend parallel to each other;
determining, for at least one of the two road lane markings:
a first set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in an image plane; and
a second set of parameters defining an orientation and a position of a line along which the at least one of the two road lane markings extends in a road plane, where the second set of parameters are determined based on information related to spacing of the two road lane markings;
identifying, by at least determining matrix elements of a homography matrix, a linear transformation which, for at least one of the two road lane markings, defines a mapping between the first set of parameters and the second set of parameters; and
establishing the extrinsic parameters based on the identified linear transformation, where the established extrinsic parameters include an orientation of the camera.
20. The method of claim 19,
where a first bilinear equation defines coordinates of the road plane and a second bilinear equation defines coordinates of the image plane,
where, for at least one of the two road lane markings, a first parameter of the first bilinear equation is set equal to zero, and a quotient of a second parameter and a third parameter of the first bilinear equation is set based on information related to spacing of the two road lane markings,
where, for at least one of the two road lane markings, the quotient of the second parameter and the third parameter is set to
(2·k(i)+1)·l/2+s,
and
where k(i) denotes an integer number that depends on a road lane marking identifier, l denotes a respective road lane width, and s denotes an offset from a middle longitudinal axis of a respective road lane that is independent of the road lane marking identifier.

The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.

1. A method operable on a computer that is coupled to a memory containing video data of a prescribed physical area, the method responsive to the movement of objects in the prescribed physical area, the method comprising:
identifying objects in the prescribed physical area by querying the video data, wherein the objects that are identified are moving in the prescribed physical area;
specifying, by a user, a direction arrow completely within the prescribed physical area, wherein the direction arrow is superimposed on a user-selected portion of an image of the prescribed physical area;
determining a number of objects that move in a direction of the specified direction arrow; and
generating a visually perceptible output of the number of objects that move in the direction of the specified direction arrow, such that a movement path of each of the respective numbers of objects is superimposed on the user-selected portion of the image of the prescribed physical area.
2. The method according to claim 1, wherein the act of generating the visually perceptible output of the number of objects further comprises:
generating a still image of the prescribed physical area; and
superimposing a representation of the number of objects over the still image.
3. The method according to claim 2, wherein the act of superimposing the representation of the number of objects over the still image further comprises:
plotting the movements of the number of objects that moved in the direction of the specified direction arrow in the prescribed physical area.
4. The method according to claim 1, further comprising:
specifying a virtual line in the prescribed physical area,
wherein the specified direction arrow is specified with respect to the virtual line.
5. The method according to claim 1, further comprising:
defining a region of interest in the prescribed physical area,
wherein only objects that move in the direction of the specified direction arrow in the region of interest are determined.
6. A method operable on a computer to detect and analyze movement of objects in a prescribed physical area, the method comprising:
receiving video data of the prescribed physical area from at least one camera;
detecting movement of objects in the prescribed physical area;
generating video metadata in response to the detection act, the video metadata representing the detected objects and their movement in the prescribed physical area;
storing the video metadata in a database;
specifying, by a user, a direction arrow completely within the prescribed physical area, wherein the direction arrow is superimposed on a user-selected portion of an image of the prescribed physical area;
determining a number of objects that move in a direction of the specified direction arrow; and
generating a visually perceptible output of the number of objects that move in the direction of the specified direction arrow, such that a movement path of each of the respective numbers of objects is superimposed on the user-selected portion of the image of the prescribed physical area.
7. A system for detecting and analyzing movement of objects in a prescribed physical area, the system comprising:
an interface that receives video data of the prescribed physical area;
a video management system coupled to the interface, the video management system storing the video data;
a video analytics engine coupled to the video management system, the video analytics engine generating video metadata representing at least one object and its movement in the prescribed physical area;
an object movement analytics engine coupled to the video analytics engine, the object movement analytics engine determining a number of objects that move in a direction of a specified direction arrow completely within the prescribed physical area, wherein the direction arrow is superimposed by a user on a user-selected portion of an image of the prescribed physical area; and
a display coupled to the object movement analytics engine, the display displaying a visually perceptible output of the number of objects that move in the direction of the specified direction arrow, such that a movement path of each of the respective numbers of objects is superimposed on the user-selected portion of the image of the prescribed physical area.
8. The system of claim 7, wherein the interface receives video data from a plurality of cameras corresponding to a plurality of prescribed physical areas.
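The counting step shared by claims 1, 6 and 7 (determining "a number of objects that move in a direction of the specified direction arrow") can be sketched by comparing each object's net displacement with the arrow direction. The cosine threshold is an assumed tunable, not part of the claims:

```python
import math

def count_in_direction(movement_paths, arrow_vector, cos_threshold=0.7):
    """Count objects whose net displacement points along the user-drawn
    direction arrow (cosine similarity at or above the threshold).

    movement_paths: list of paths, each a list of (x, y) points from the
    stored video metadata; arrow_vector: (dx, dy) of the direction arrow."""
    ax, ay = arrow_vector
    arrow_norm = math.hypot(ax, ay)
    count = 0
    for path in movement_paths:
        (x0, y0), (x1, y1) = path[0], path[-1]
        dx, dy = x1 - x0, y1 - y0
        disp_norm = math.hypot(dx, dy)
        if disp_norm == 0:
            continue  # stationary object: no direction of travel
        cos_sim = (dx * ax + dy * ay) / (disp_norm * arrow_norm)
        if cos_sim >= cos_threshold:
            count += 1
    return count
```

The matching paths themselves would then be plotted over a still image of the prescribed physical area, per claims 2–3.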