
1. A seed of hybrid maize variety X00A016, produced by crossing a first plant of variety PHW6G with a second plant of variety PHEDR, wherein representative seed of said varieties PHW6G and PHEDR have been deposited under ATCC Accession Number PTA-11991 and PTA-7754, respectively.
2. A plant or plant part produced by growing the seed of the hybrid maize variety of claim 1.
3. A method for producing a second maize plant comprising applying plant breeding techniques to a first maize plant, or parts thereof, wherein said first maize plant is the maize plant of claim 2, and wherein application of said techniques results in the production of said second maize plant.
4. The method of claim 3, further defined as producing an inbred maize plant derived from hybrid maize variety X00A016, the method comprising the steps of:
(a) crossing said first maize plant with itself or another maize plant to produce seed of a subsequent generation;
(b) harvesting and planting the seed of the subsequent generation to produce at least one plant of the subsequent generation; and
(c) repeating steps (a) and (b) for an additional 2-10 generations to produce an inbred maize plant derived from hybrid maize variety X00A016.
5. The method of claim 3, further defined as producing an inbred maize plant derived from hybrid maize variety X00A016, the method comprising the steps of:
(a) crossing said first maize plant with an inducer variety to produce haploid seed; and
(b) doubling the haploid seed to produce an inbred maize plant derived from hybrid maize variety X00A016.
6. A seed of hybrid maize variety X00A016 further comprising a locus conversion, wherein said seed is produced by crossing a first plant of variety PHW6G with a second plant of variety PHEDR; wherein representative seed of said varieties PHW6G and PHEDR have been deposited under ATCC Accession Number PTA-11991 and PTA-7754, respectively; and wherein at least one of said varieties PHW6G and PHEDR further comprises a locus conversion.
7. The seed of claim 6, wherein the locus conversion confers a trait selected from the group consisting of male sterility, site-specific recombination, abiotic stress tolerance, altered phosphorus, altered antioxidants, altered fatty acids, altered essential amino acids, altered carbohydrates, herbicide tolerance, insect resistance and disease resistance.
8. A plant or plant part produced by growing the seed of the maize hybrid variety of claim 6.
9. A method for producing a second maize plant comprising applying plant breeding techniques to a first maize plant, or parts thereof, wherein said first maize plant is the maize plant of claim 8, and wherein application of said techniques results in the production of said second maize plant.
10. The method of claim 9, further defined as producing an inbred maize plant derived from hybrid maize variety X00A016, the method comprising the steps of:
(a) crossing said first maize plant with itself or another maize plant to produce seed of a subsequent generation;
(b) harvesting and planting the seed of the subsequent generation to produce at least one plant of the subsequent generation; and
(c) repeating steps (a) and (b) for an additional 2-10 generations to produce an inbred maize plant derived from hybrid maize variety X00A016.
11. The method of claim 9, further defined as producing an inbred maize plant derived from hybrid maize variety X00A016, the method comprising the steps of:
(a) crossing said first maize plant with an inducer variety to produce haploid seed; and
(b) doubling the haploid seed to produce an inbred maize plant derived from hybrid maize variety X00A016.
The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.

1. A device, comprising:
a housing configured with a working element;
a motor configured for urging motion of the working element;
a power control module in electrical control of said motor, the power control module in electrical connection with a first power source and a second power source, the first power source being a battery assembly having a Direct Current (DC) power output, and the second power source being a power inverter receiving Alternating Current (AC) power from an AC power input line, the power control module having a power selection switch for selectively connecting at least one of the first power source and the second power source with the motor to cause said motor to urge motion of the working element;
a charge switch for selecting between a first mode for charging the battery assembly when the power control module is receiving AC power and a second mode for not charging the battery assembly; and
a charge controller for controlling battery assembly charging when the first mode of the charge switch is selected,
wherein the charge controller comprises logic embedded in the power control module.
2. The device as claimed in claim 1, wherein the power inverter is a rectifier and filter combination.
3. The device as claimed in claim 1, wherein the power inverter includes a step down controller.
4. The device as claimed in claim 3, wherein the step down controller includes a voltage rectifier and a pulse width modulator.
5. The device as claimed in claim 1, further comprising a current sensor for sensing current level of the battery assembly, wherein the charge controller analyzes current level changes obtained by the current sensor and adjusts a configurable duty cycle.
6. The device as claimed in claim 1, further comprising a timer logic with a predetermined maximum charge time, wherein the charge controller automatically sets to a maintenance mode when the predetermined maximum charge time has passed.
7. The device as claimed in claim 6, wherein the predetermined maximum charge time is 12.5 hours.
8. A system, comprising:
a housing configured with a working element;
a motor configured for urging motion of the working element;
a power control module in electrical control of said motor, the power control module in electrical connection with a first power source and a second power source, the first power source being a battery assembly having a Direct Current (DC) power output, and the second power source being a power inverter receiving Alternating Current (AC) power from an AC power input line, the power control module having a power selection switch for selectively connecting at least one of the first power source and the second power source with the motor to cause said motor to urge motion of the working element;
a charge switch for selecting between a first mode for charging the battery assembly when the power control module is receiving AC power and a second mode for not charging the battery assembly; and
means for controlling charging of the battery assembly when the first mode of the charge switch is selected,
wherein the controlling means comprises logic embedded in the power control module.
9. The system as claimed in claim 8, wherein the power inverter is a rectifier and filter combination.
10. The system as claimed in claim 8, wherein the power inverter includes a step down controller comprising a voltage rectifier and a pulse width modulator.
11. The system as claimed in claim 8, further comprising a current sensor for sensing current level of the battery assembly, wherein the controlling means analyzes current level changes obtained by the current sensor and adjusts a configurable duty cycle.
12. The system as claimed in claim 8, further comprising a timer logic with a predetermined maximum charge time, wherein the controlling means automatically sets to a maintenance mode when the predetermined maximum charge time has passed.
13. The system as claimed in claim 12, wherein the predetermined maximum charge time is 12.5 hours.


1. A method comprising:
receiving a current frame comprising a frame sync segment and a plurality of data segments, wherein the current frame contains a current map indicating a location of at least first and second differently coded data in a first frame, a next map indicating a location of at least first and second differently coded data in a second frame, and a count indicating the number of frames until the next map becomes the current map; and,
processing the at least first and second differently coded data in the first frame in response to the current map.
2. The method of claim 1 wherein the current map, the next map, and the count are contained in the same segment of the current frame.
3. The method of claim 2 wherein the segment containing the current map, the next map, and the count comprises a data segment of the current frame.
4. The method of claim 1 further comprising:
maintaining a count related to when the next map will change to the current map; and,
counting down from the count based on frame times.
5. The method of claim 4 wherein the current map, the next map, and the count are contained in the same segment of the current frame.
6. The method of claim 5 wherein the segment containing the current map, the next map, and the count comprises a data segment of the current frame.
7. The method of claim 1 wherein the first frame is the current frame, and wherein the second frame is a future frame.
8. A method of transmitting a current frame having a frame sync segment and a plurality of data segments comprising:
inserting a current map, a next map, and a count into the current frame, wherein the current map indicates a location of at least first and second differently coded data in a first frame, wherein the next map indicates a location of at least first and second differently coded data in a second frame, and wherein the count indicates the number of frames until the next map becomes the current map; and,
transmitting the current frame.
9. The method of claim 8 wherein the current map, the next map, and the count are contained in the same segment of the current frame.
10. The method of claim 9 wherein the segment containing the current map, the next map, and the count comprises a data segment of the current frame.
11. The method of claim 8 wherein the first frame is the current frame, and wherein the second frame is a future frame.
12. A method comprising:
receiving a frame comprising first and second fields each having a frame sync segment and a plurality of data segments, wherein the first field contains a current map and count information, wherein the second field contains a next map and count information, wherein the current map indicates location of at least first and second differently coded data in a current frame, wherein the next map indicates location of at least first and second differently coded data in a future frame, and wherein the count information indicates the number of frames until the next map becomes the current map; and,
processing the at least first and second differently coded data in the current frame in response to the current map.
13. The method of claim 12 wherein the current map and count information are contained in the same segment of the first field, and wherein the next map and count information are contained in the same segment of the second field.
14. The method of claim 13 wherein the segment containing the current map and count information comprises a data segment, and wherein the segment containing the next map and count information comprises a data segment.
15. The method of claim 12 further comprising:
maintaining a count related to when the next map will change to the current map; and,
counting down from the count based on frame times.
16. The method of claim 15 wherein the current map and count information are contained in the same segment of the first field, and wherein the next map and count information are contained in the same segment of the second field.
17. The method of claim 16 wherein the segment containing the current map and count information comprises a data segment, and wherein the segment containing the next map and count information comprises a data segment.
18. The method of claim 12 wherein the current map further indicates a coding rate for at least one of the first and second differently coded data in the current frame, and wherein the next map further indicates a coding rate for at least one of the first and second differently coded data in the future frame.
19. The method of claim 12 wherein the current map further indicates at least first and second coding rates corresponding to the at least first and second differently coded data in the current frame, and wherein the next map further indicates at least first and second coding rates corresponding to the at least first and second differently coded data in the future frame.
20. A method of transmitting a frame having first and second fields each having a frame sync segment and a plurality of data segments comprising:
inserting a current map and count information into the first field, wherein the current map indicates location of at least first and second differently coded data in a current frame;
inserting a next map and count information into the second field, wherein the next map indicates location of at least first and second differently coded data in a future frame, and wherein the count information indicates the number of frames until the next map becomes the current map; and,
transmitting the first and second fields of the frame.
21. The method of claim 20 wherein the current map and count information are contained in the same segment of the first field, and wherein the next map and count information are contained in the same segment of the second field.
22. The method of claim 20 wherein the segment containing the current map and count information comprises a data segment of the first field, and wherein the segment containing the next map and count information comprises a data segment of the second field.
23. The method of claim 20 wherein the current map further indicates a coding rate for at least one of the first and second differently coded data in the current frame, and wherein the next map further indicates a coding rate for at least one of the first and second differently coded data in the future frame.
24. The method of claim 20 wherein the current map further indicates at least first and second coding rates corresponding to the at least first and second differently coded data in the current frame, and wherein the next map further indicates at least first and second coding rates corresponding to the at least first and second differently coded data in the future frame.
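The transmit-side insertion of claims 8-10 and the receive-side countdown of claims 1 and 4 can be sketched together. This is an illustrative model only: the frame layout, field names, and map encoding are assumptions, and a real transmitter would refresh the next map between map changes. What it shows is the claimed arrangement of the current map, next map, and count in the same data segment, and the receiver counting down in frame times until the next map becomes the current map.

```python
# Illustrative sketch (names and encoding assumed, not from the claims) of a
# frame carrying a current map, a next map, and a count in one data segment,
# and a receiver that counts down frame times until the map switch.

from dataclasses import dataclass, field

@dataclass
class Frame:
    sync: bytes                 # frame sync segment
    segments: list = field(default_factory=list)  # data segments

def build_frame(current_map, next_map, count, payload_segments):
    """Insert current map, next map, and count into the same data
    segment of the frame (claims 8-10), then append the payload."""
    map_segment = {"current_map": current_map,
                   "next_map": next_map,
                   "count": count}
    return Frame(sync=b"SYNC", segments=[map_segment] + payload_segments)

class Receiver:
    """Processes differently coded data per the current map (claim 1) and
    counts down, in frame times, to the map change (claim 4)."""
    def __init__(self):
        self.current_map = None
        self.countdown = None

    def process(self, frame):
        info = frame.segments[0]
        if self.current_map is None:
            self.current_map = info["current_map"]
            self.countdown = info["count"]
        self.countdown -= 1
        if self.countdown <= 0:
            # The next map becomes the current map.
            self.current_map = info["next_map"]
            self.countdown = info["count"]
        return self.current_map
```

A map here is simply a dict from stream name to a (segment offset, length) location, standing in for "a location of at least first and second differently coded data."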

The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.

1. A method for dispatching a software thread to a logical processor in a shared processor logical partition, the method comprising:
responsive to a software thread being made ready to run, determining, by an operating system running in a shared processor logical partition in a logical partitioned data processing system, whether cache affinity is not likely to provide a performance benefit, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has been dispatched and undispatched a predetermined number of times since the last time the software thread was undispatched;
responsive to a determination that cache affinity is not likely to provide a performance benefit, selecting, by the operating system, a logical processor based on other criteria; and
queuing, by the operating system, the software thread to run on the selected logical processor.
2. The method of claim 1, further comprising:
responsive to a determination that cache affinity is likely to provide a performance benefit, identifying a logical processor on which the software thread was last dispatched; and
queuing the software thread to run on the logical processor on which the software thread was last dispatched.
3. The method of claim 1, wherein the operating system determines that cache affinity is not likely to provide a performance benefit
responsive to determining that the software thread has been undispatched for a predetermined period of time.
4. The method of claim 1, wherein the operating system determines that cache affinity is not likely to provide a performance benefit
responsive to determining that a logical processor on which the software thread was last dispatched has not been undispatched since the last time the software thread was dispatched.
5. The method of claim 1, wherein the operating system determines that cache affinity is not likely to provide a performance benefit
responsive to determining that a number of non-affinity dispatches of a logical processor on which the software thread was last dispatched is non-zero.
6. The method of claim 1, further comprising:
collecting, by the operating system, logical partition metrics for each logical processor by monitoring a virtual processor area provided for each virtual processor for communication with the operating system.
7. An apparatus for dispatching a software thread to a logical processor in a shared processor logical partition, the apparatus comprising:
a plurality of processors;
a server firmware that dispatches the one or more of the plurality of processors to one or more shared processor logical partitions; and
an operating system running in a shared processor logical partition within the one or more shared processor logical partitions,
wherein responsive to a software thread being made ready to run, the operating system determines whether cache affinity is not likely to provide a performance benefit, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has been dispatched and undispatched a predetermined number of times since the last time the software thread was undispatched;
wherein responsive to a determination that cache affinity is not likely to provide a performance benefit, the operating system selects a logical processor based on other criteria; and
wherein the operating system queues the software thread to run on the selected logical processor.
8. The apparatus of claim 7, wherein responsive to a determination that cache affinity is likely to provide a performance benefit, the operating system identifies a logical processor on which the software thread was last dispatched and queues the software thread to run on the logical processor on which the software thread was last dispatched.
9. The apparatus of claim 7, wherein the server firmware provides a virtual processor area for each virtual processor for communication with the operating system;
wherein the operating system collects logical partition metrics for each logical processor by monitoring the virtual processor area.
10. The apparatus of claim 7, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that the software thread has been undispatched for a predetermined period of time.
11. The apparatus of claim 7, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has not been undispatched since the last time the software thread was dispatched.
12. The apparatus of claim 7, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a number of non-affinity dispatches of a logical processor on which the software thread was last dispatched is non-zero.
13. A computer program product comprising a computer storage medium having a computer readable program stored thereon, wherein the computer readable program, when executed on a computing device, causes the computing device to:
responsive to a software thread being made ready to run, determine, by an operating system running in a shared processor logical partition in a logical partitioned data processing system, whether cache affinity is not likely to provide a performance benefit, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has been dispatched and undispatched a predetermined number of times since the last time the software thread was undispatched;
select, by the operating system, a logical processor based on other criteria responsive to a determination that cache affinity is not likely to provide a performance benefit; and
queue, by the operating system, the software thread to run on the selected logical processor.
14. The computer program product of claim 13, wherein the computer readable program further causes the computing device to, responsive to a determination that cache affinity is likely to provide a performance benefit, identify a logical processor on which the software thread was last dispatched; and queue the software thread to run on the logical processor on which the software thread was last dispatched.
15. The computer program product of claim 13, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that the software thread has been undispatched for a predetermined period of time.
16. The computer program product of claim 13, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has not been undispatched since the last time the software thread was dispatched.
17. The computer program product of claim 13, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a number of non-affinity dispatches of a logical processor on which the software thread was last dispatched is non-zero.
18. The computer program product of claim 13, wherein the computer readable program further causes the computing device to:
collect logical partition metrics for each logical processor by monitoring a virtual processor area provided for each virtual processor for communication with the operating system.
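The dispatch decision of claims 1-5 can be sketched as a single selection function. This is a minimal sketch under assumed data structures: the threshold values, field names, and the "least-loaded processor" fallback are hypothetical (the claims only recite selection "based on other criteria"). What it preserves from the claims is the set of conditions under which cache affinity is judged not likely to help, and the reuse of the last logical processor otherwise.

```python
# Hypothetical sketch of the cache-affinity dispatch decision in claims 1-5.
# Thresholds and the fallback policy are assumptions, not from the claims.

AFFINITY_DISPATCH_LIMIT = 3    # "predetermined number of times" (claim 1)
MAX_UNDISPATCHED_SECS = 0.010  # "predetermined period of time" (claim 3)

def choose_logical_processor(thread, cpus):
    """Pick the logical processor on which to queue a ready thread.

    `thread` and each entry of `cpus` are plain dicts standing in for the
    OS bookkeeping the claims imply (per-processor dispatch counts, etc.).
    """
    last = cpus[thread["last_cpu"]]
    if last["cycles_since_thread"] >= AFFINITY_DISPATCH_LIMIT:
        # Claim 1: the last logical processor has been dispatched and
        # undispatched too many times since the thread last ran there,
        # so its cache contents are likely gone.
        affinity = False
    elif thread["undispatched_secs"] > MAX_UNDISPATCHED_SECS:
        # Claim 3: the thread has been off-processor too long.
        affinity = False
    elif last["non_affinity_dispatches"] != 0:
        # Claim 5: other threads were dispatched there in the interim.
        affinity = False
    else:
        affinity = True
    if affinity:
        # Claim 2: queue the thread on the processor it last ran on.
        return thread["last_cpu"]
    # "Other criteria" (assumed here): least-loaded logical processor.
    return min(cpus, key=lambda c: cpus[c]["run_queue_len"])
```

Claim 4's condition (the last logical processor has not been undispatched at all since the thread was last dispatched, so the physical processor behind it may have changed) would slot in as one more `elif` branch with the same shape.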