1. A method comprising:
receiving a current frame comprising a frame sync segment and a plurality of data segments, wherein the current frame contains a current map indicating a location of at least first and second differently coded data in a first frame, a next map indicating a location of at least first and second differently coded data in a second frame, and a count indicating the number of frames until the next map becomes the current map; and,
processing the at least first and second differently coded data in the first frame in response to the current map.
2. The method of claim 1 wherein the current map, the next map, and the count are contained in the same segment of the current frame.
3. The method of claim 2 wherein the segment containing the current map, the next map, and the count comprises a data segment of the current frame.
4. The method of claim 1 further comprising:
maintaining a count related to when the next map will change to the current map; and,
counting down from the count based on frame times.
5. The method of claim 4 wherein the current map, the next map, and the count are contained in the same segment of the current frame.
6. The method of claim 5 wherein the segment containing the current map, the next map, and the count comprises a data segment of the current frame.
7. The method of claim 1 wherein the first frame is the current frame, and wherein the second frame is a future frame.
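Claims 1 through 7 describe a receiver that tracks a current map, a next map, and a count, counting down frame times until the next map becomes the current map. The sketch below is a hypothetical illustration of that mechanism; the frame layout, field names, and dictionary structure are assumptions, not part of the claims.

```python
# Illustrative sketch of the receive-side map/count handling of claims 1-7.
# The frame representation (a dict with "segments", "current_map", "next_map",
# and "count") is assumed for illustration only.

class MapTracker:
    """Tracks the current and next data-segment maps and counts down
    frame times until the next map becomes the current map (claims 4-6)."""

    def __init__(self):
        self.current_map = None   # locations of differently coded data now
        self.next_map = None      # locations that will apply in a future frame
        self.count = 0            # frames remaining until the switch

    def on_frame(self, frame):
        # Each received frame carries a current map, a next map, and a count
        # (here assumed to share one data segment, per claims 2 and 3).
        self.current_map = frame["current_map"]
        self.next_map = frame["next_map"]
        self.count = frame["count"]
        # Process the differently coded data at the locations the current
        # map indicates (the processing step of claim 1).
        return [frame["segments"][loc] for loc in self.current_map]

    def tick(self):
        # Claims 4-6: count down from the count based on frame times; when
        # the count reaches zero, the next map becomes the current map.
        if self.count > 0:
            self.count -= 1
            if self.count == 0:
                self.current_map = self.next_map
```

A receiver would call `on_frame` for each received frame and `tick` once per frame time to advance the countdown.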
8. A method of transmitting a current frame having a frame sync segment and a plurality of data segments comprising:
inserting a current map, a next map, and a count into the current frame, wherein the current map indicates a location of at least first and second differently coded data in a first frame, wherein the next map indicates a location of at least first and second differently coded data in a second frame, and wherein the count indicates the number of frames until the next map becomes the current map; and,
transmitting the current frame.
9. The method of claim 8 wherein the current map, the next map, and the count are contained in the same segment of the current frame.
10. The method of claim 9 wherein the segment containing the current map, the next map, and the count comprises a data segment of the current frame.
11. The method of claim 8 wherein the first frame is the current frame, and wherein the second frame is a future frame.
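Claims 8 through 11 describe the transmit side: inserting the current map, next map, and count into the current frame and transmitting it. The following sketch assumes a simple in-memory frame structure and a list standing in for the transmission channel; both are illustrative assumptions.

```python
# Illustrative sketch of the transmit side (claims 8-11). The byte-level
# layout is not specified by the claims; a dict-based frame is assumed.

def build_frame(sync, data_segments, current_map, next_map, count):
    """Return a frame whose first data segment carries the map/count
    triple in a single data segment (claims 9-10)."""
    map_segment = {"current_map": current_map,
                   "next_map": next_map,
                   "count": count}
    return {"sync": sync, "segments": [map_segment] + list(data_segments)}

def transmit(frame, channel):
    # Stand-in for the physical transmission step of claim 8.
    channel.append(frame)
```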
12. A method comprising:
receiving a frame comprising first and second fields each having a frame sync segment and a plurality of data segments, wherein the first field contains a current map and count information, wherein the second field contains a next map and count information, wherein the current map indicates location of at least first and second differently coded data in a current frame, wherein the next map indicates location of at least first and second differently coded data in a future frame, and wherein the count information indicates the number of frames until the next map becomes the current map; and,
processing the at least first and second differently coded data in the current frame in response to the current map.
13. The method of claim 12 wherein the current map and count information are contained in the same segment of the first field, and wherein the next map and count information are contained in the same segment of the second field.
14. The method of claim 13 wherein the segment containing the current map and count information comprises a data segment, and wherein the segment containing the next map and count information comprises a data segment.
15. The method of claim 12 further comprising:
maintaining a count related to when the next map will change to the current map; and,
counting down from the count based on frame times.
16. The method of claim 15 wherein the current map and count information are contained in the same segment of the first field, and wherein the next map and count information are contained in the same segment of the second field.
17. The method of claim 16 wherein the segment containing the current map and count information comprises a data segment, and wherein the segment containing the next map and count information comprises a data segment.
18. The method of claim 12 wherein the current map further indicates a coding rate for at least one of the first and second differently coded data in the current frame, and wherein the next map further indicates a coding rate for at least one of the first and second differently coded data in the future frame.
19. The method of claim 12 wherein the current map further indicates at least first and second coding rates corresponding to the at least first and second differently coded data in the current frame, and wherein the next map further indicates at least first and second coding rates corresponding to the at least first and second differently coded data in the future frame.
20. A method of transmitting a frame having first and second fields each having a frame sync segment and a plurality of data segments comprising:
inserting a current map and count information into the first field, wherein the current map indicates location of at least first and second differently coded data in a current frame;
inserting a next map and count information into the second field, wherein the next map indicates location of at least first and second differently coded data in a future frame, and wherein the count information indicates the number of frames until the next map becomes the current map; and,
transmitting the first and second fields of the frame.
21. The method of claim 20 wherein the current map and count information are contained in the same segment of the first field, and wherein the next map and count information are contained in the same segment of the second field.
22. The method of claim 21 wherein the segment containing the current map and count information comprises a data segment of the first field, and wherein the segment containing the next map and count information comprises a data segment of the second field.
23. The method of claim 20 wherein the current map further indicates a coding rate for at least one of the first and second differently coded data in the current frame, and wherein the next map further indicates a coding rate for at least one of the first and second differently coded data in the future frame.
24. The method of claim 20 wherein the current map further indicates at least first and second coding rates corresponding to the at least first and second differently coded data in the current frame, and wherein the next map further indicates at least first and second coding rates corresponding to the at least first and second differently coded data in the future frame.
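Claims 12 through 24 describe a two-field frame: the first field carries the current map with count information, the second field carries the next map with count information, and each map entry may also carry a coding rate (claims 18, 19, 23, and 24). The sketch below illustrates one possible arrangement; the field structure and the pairing of a location with a rate string are assumptions.

```python
# Illustrative sketch of the two-field frame of claims 12-24. Each map
# entry is assumed to pair a segment location with a coding rate, e.g.
# (4, "1/2"); the claims require only that the map indicate both.

def build_two_field_frame(sync, first_data, second_data,
                          current_map, next_map, count):
    # First field: frame sync, data segments, current map, count info.
    first_field = {"sync": sync, "segments": list(first_data),
                   "map": current_map, "count": count}
    # Second field: frame sync, data segments, next map, count info.
    second_field = {"sync": sync, "segments": list(second_data),
                    "map": next_map, "count": count}
    return first_field, second_field
```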
The claims below are in addition to those above.
All references to claims which appear below refer to the numbering after this sentence.
1. A method for dispatching a software thread to a logical processor in a shared processor logical partition, the method comprising:
responsive to a software thread being made ready to run, determining, by an operating system running in a shared processor logical partition in a logical partitioned data processing system, whether cache affinity is not likely to provide a performance benefit, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has been dispatched and undispatched a predetermined number of times since the last time the software thread was undispatched;
responsive to a determination that cache affinity is not likely to provide a performance benefit, selecting, by the operating system, a logical processor based on other criteria; and
queuing, by the operating system, the software thread to run on the selected logical processor.
2. The method of claim 1, further comprising:
responsive to a determination that cache affinity is likely to provide a performance benefit, identifying a logical processor on which the software thread was last dispatched; and
queuing the software thread to run on the logical processor on which the software thread was last dispatched.
3. The method of claim 1, wherein the operating system determines that cache affinity is not likely to provide a performance benefit
responsive to determining that the software thread has been undispatched for a predetermined period of time.
4. The method of claim 1, wherein the operating system determines that cache affinity is not likely to provide a performance benefit
responsive to determining that a logical processor on which the software thread was last dispatched has not been undispatched since the last time the software thread was dispatched.
5. The method of claim 1, wherein the operating system determines that cache affinity is not likely to provide a performance benefit
responsive to determining that a number of non-affinity dispatches of a logical processor on which the software thread was last dispatched is non-zero.
6. The method of claim 1, further comprising:
collecting, by the operating system, logical partition metrics for each logical processor by monitoring a virtual processor area provided for each virtual processor for communication with the operating system.
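Claims 1 through 5 of this set enumerate several conditions under which the operating system determines that cache affinity is not likely to provide a performance benefit. The sketch below is a hypothetical illustration of that decision; the thresholds, data structures, and the choice of "shortest run queue" as the other selection criteria are all assumptions not specified by the claims.

```python
from dataclasses import dataclass, field

DISPATCH_LIMIT = 3        # "predetermined number of times" (claim 1), assumed
UNDISPATCH_TIMEOUT = 10   # "predetermined period of time" (claim 3), assumed

@dataclass
class LogicalCpu:
    dispatch_count: int = 0              # dispatch/undispatch cycles since
                                         # the thread was last undispatched
    undispatched_since_thread: bool = True
    non_affinity_dispatches: int = 0     # claim 5's counter
    run_queue: list = field(default_factory=list)

@dataclass
class Thread:
    last_cpu: int = 0
    undispatched_for: int = 0            # time since last undispatched

def affinity_unlikely(thread, cpu):
    # Any one of the conditions of claims 1, 3, 4, or 5 defeats affinity.
    return (cpu.dispatch_count >= DISPATCH_LIMIT
            or thread.undispatched_for >= UNDISPATCH_TIMEOUT
            or not cpu.undispatched_since_thread
            or cpu.non_affinity_dispatches > 0)

def dispatch(thread, cpus):
    last = cpus[thread.last_cpu]
    if affinity_unlikely(thread, last):
        # "Other criteria" are unspecified by the claims; the shortest
        # run queue is an assumed example of such criteria.
        target = min(cpus, key=lambda c: len(c.run_queue))
    else:
        # Claim 2: reuse the logical processor the thread last ran on.
        target = last
    target.run_queue.append(thread)
    return target
```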
7. An apparatus for dispatching a software thread to a logical processor in a shared processor logical partition, the apparatus comprising:
a plurality of processors;
a server firmware that dispatches one or more of the plurality of processors to one or more shared processor logical partitions; and
an operating system running in a shared processor logical partition within the one or more shared processor logical partitions,
wherein responsive to a software thread being made ready to run, the operating system determines whether cache affinity is not likely to provide a performance benefit, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has been dispatched and undispatched a predetermined number of times since the last time the software thread was undispatched;
wherein responsive to a determination that cache affinity is not likely to provide a performance benefit, the operating system selects a logical processor based on other criteria; and
wherein the operating system queues the software thread to run on the selected logical processor.
8. The apparatus of claim 7, wherein responsive to a determination that cache affinity is likely to provide a performance benefit, the operating system identifies a logical processor on which the software thread was last dispatched and queues the software thread to run on the logical processor on which the software thread was last dispatched.
9. The apparatus of claim 7, wherein the server firmware provides a virtual processor area for each virtual processor for communication with the operating system;
wherein the operating system collects logical partition metrics for each logical processor by monitoring the virtual processor area.
10. The apparatus of claim 7, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that the software thread has been undispatched for a predetermined period of time.
11. The apparatus of claim 7, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has not been undispatched since the last time the software thread was dispatched.
12. The apparatus of claim 7, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a number of non-affinity dispatches of a logical processor on which the software thread was last dispatched is non-zero.
13. A computer program product comprising a computer storage medium having a computer readable program stored thereon, wherein the computer readable program, when executed on a computing device, causes the computing device to:
responsive to a software thread being made ready to run, determine, by an operating system running in a shared processor logical partition in a logical partitioned data processing system, whether cache affinity is not likely to provide a performance benefit, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has been dispatched and undispatched a predetermined number of times since the last time the software thread was undispatched;
select, by the operating system, a logical processor based on other criteria responsive to a determination that cache affinity is not likely to provide a performance benefit; and
queue, by the operating system, the software thread to run on the selected logical processor.
14. The computer program product of claim 13, wherein the computer readable program further causes the computing device to, responsive to a determination that cache affinity is likely to provide a performance benefit, identify a logical processor on which the software thread was last dispatched; and queue the software thread to run on the logical processor on which the software thread was last dispatched.
15. The computer program product of claim 13, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that the software thread has been undispatched for a predetermined period of time.
16. The computer program product of claim 13, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a logical processor on which the software thread was last dispatched has not been undispatched since the last time the software thread was dispatched.
17. The computer program product of claim 13, wherein the operating system determines that cache affinity is not likely to provide a performance benefit responsive to determining that a number of non-affinity dispatches of a logical processor on which the software thread was last dispatched is non-zero.
18. The computer program product of claim 13, wherein the computer readable program further causes the computing device to:
collect logical partition metrics for each logical processor by monitoring a virtual processor area provided for each virtual processor for communication with the operating system.