1. An automated data storage library, comprising:
a plurality of storage shelves for storing data storage media;
at least one data storage drive for reading and/or writing data with respect to said data storage media;
at least one robot accessor for transporting said data storage media between said plurality of storage shelves and said at least one data storage drive;
a network; and
a plurality of processor nodes for operating said automated data storage library, said plurality of processor nodes comprising nodes of said network, each of said plurality of processor nodes comprising at least one processor and at least one interface to said network;
at least one of said plurality of processor nodes:
upon detection of a node address failure of said processor node for said network, disables said processor node from said network at said at least one interface.
2. The automated data storage library of claim 1, wherein said at least one processor node:
additionally comprises a nonvolatile memory;
maintains an alternate node address of said processor node in said nonvolatile memory;
detects said node address failure, by attempting to determine its own node address; and
upon failing to determine any usable node address as its own, selects said alternate node address in said nonvolatile memory.
3. The automated data storage library of claim 1, wherein said at least one processor node detects said node address failure, by sensing said network at said processor node interface.
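For illustration only (not part of the claims): the recovery behavior recited in claims 1-3 above can be sketched as follows. All class and method names are hypothetical; the claims do not prescribe an implementation.

```python
class ProcessorNode:
    """Illustrative node model: nonvolatile memory holds an alternate address."""

    def __init__(self, nominal_address, alternate_address):
        self.nonvolatile = {"alternate": alternate_address}  # alternate node address
        self.nominal = nominal_address
        self.address = None
        self.interface_enabled = True

    def determine_address(self):
        # Stand-in for hardware/firmware address determination;
        # returning None models a node address failure.
        return self.nominal

    def resolve_address(self):
        address = self.determine_address()
        if address is None:
            # Node address failure: disable the node at its network
            # interface, then select the alternate address kept in
            # nonvolatile memory (claims 1 and 2).
            self.interface_enabled = False
            address = self.nonvolatile["alternate"]
        self.address = address
        return address
```

For example, a node whose address determination fails falls back to its stored alternate while its interface remains disabled.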
4. An automated data storage library, comprising:
a plurality of storage shelves for storing data storage media;
at least one data storage drive for reading and/or writing data with respect to said data storage media;
at least one robot accessor for transporting said data storage media between said plurality of storage shelves and said at least one data storage drive;
a network; and
a plurality of processor nodes for operating said automated data storage library, said plurality of processor nodes comprising nodes of said network, each of said plurality of processor nodes comprising at least one processor and at least one interface to said network;
at least one of said plurality of processor nodes:
determines a nominal node address as its own;
senses node addresses of other processor nodes of said network;
compares said sensed node addresses of other processor nodes with said nominal node address;
determines existence of any conflict between at least one of said sensed node addresses of other processor nodes and said nominal node address, said existing conflict comprising a node address failure of said processor node for said network; and
upon detection of said node address failure of said processor node for said network, disables said processor node nominal node address from said network at said at least one interface.
5. The automated data storage library of claim 4, wherein said at least one processor node:
upon detection of said node address failure of said processor node for said network, additionally selects a node address that avoids said node address failure, said selected node address for addressing said processor node in said network at said at least one interface upon enabling said processor node in said network.
6. The automated data storage library of claim 5, wherein said at least one processor node:
additionally comprises a nonvolatile memory;
maintains an alternate node address of said processor node in said nonvolatile memory;
upon said determination of a conflict and detection of said node address failure of said processor node for said network, further compares said alternate node address of said nonvolatile memory with said sensed node addresses of said other processor nodes; and
if said alternate node address avoids conflict with said sensed node addresses of said other processor nodes, selects said node address that avoids said node address failure, by selecting said alternate node address of said nonvolatile memory.
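For illustration only (not part of the claims): the conflict-detection and alternate-address selection of claims 4-6 above can be sketched as a single hypothetical helper. The function name and return convention are illustrative.

```python
def check_address_conflict(nominal, sensed_addresses, alternate=None):
    """Compare a node's nominal address against node addresses sensed
    from other processor nodes on the network (claims 4-6).

    Returns (enabled, address): on a conflict the nominal address is
    disabled; if an alternate address from nonvolatile memory avoids
    the conflict, it is selected for use upon re-enabling the node.
    """
    if nominal not in sensed_addresses:
        return True, nominal          # no conflict: keep the nominal address
    if alternate is not None and alternate not in sensed_addresses:
        return True, alternate        # alternate avoids the conflict (claim 6)
    return False, None                # disable the node from the network
```

For example, a node whose nominal address clashes with a sensed address keeps operating only if its stored alternate is free of conflict.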
7. An automated data storage library, comprising:
a plurality of storage shelves for storing data storage media;
at least one data storage drive for reading and/or writing data with respect to said data storage media;
at least one robot accessor for transporting said data storage media between said plurality of storage shelves and said at least one data storage drive;
a network; and
a plurality of processor nodes for operating said automated data storage library, said plurality of processor nodes comprising nodes of said network, at least two of said plurality of processor nodes subject to reset, and comprising at least one processor, at least one interface to said network, and a timer, said timer maintaining an indication of time since said processor node has been reset;
(I) at least one of said plurality of processor nodes:
determines a nominal node address as its own;
senses node addresses of other processor nodes of said network;
compares said sensed node addresses of other processor nodes with said nominal node address;
determines existence of any conflict between at least one of said sensed node addresses of other processor nodes and said nominal node address, said existing conflict comprising a node address failure in said network; and
(II) said processor node having said conflicting sensed node address:
compares the time of said timer of said processor node to the time of said timer of said other processor node having said conflicting sensed node address, to determine the processor node having the more recent time of said timers; and
if said processor node has said more recent time,
disables said processor node nominal node address from said network at said at least one interface.
8. The automated data storage library of claim 7, wherein said processor node having said more recent time:
additionally selects a node address that avoids said node address failure, said selected node address for addressing said processor node having said more recent time, in said network, upon enabling said processor node in said network.
9. The automated data storage library of claim 8, wherein said processor node having said more recent time:
additionally comprises a nonvolatile memory;
maintains an alternate node address of said processor node in said nonvolatile memory;
further compares said alternate node address of said nonvolatile memory with said sensed node addresses of said other processor nodes; and
if said alternate node address avoids conflict with said sensed node addresses of said other processor nodes, selects said node address that avoids said node address failure, by selecting said alternate node address of said nonvolatile memory.
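For illustration only (not part of the claims): the timer-based tie-break of claims 7-9 above decides which of two conflicting nodes must yield. Since each timer holds time elapsed since its node was reset, the node with the smaller elapsed time was reset more recently and disables its nominal address. The helper below is hypothetical.

```python
def resolve_conflict_by_timer(time_since_reset_a, time_since_reset_b):
    """Decide which of two conflicting nodes yields its nominal address.

    Each argument is that node's timer value: time elapsed since the node
    was last reset. The node with the smaller elapsed time was reset more
    recently ("more recent time" in the claims) and must disable its
    nominal address at its interface. Returns 'a' or 'b' for that node.
    """
    return "a" if time_since_reset_a < time_since_reset_b else "b"
```

For example, a node reset 5 seconds ago yields to one that has been up for 120 seconds, on the rationale that the newcomer is the likelier source of the clashing address.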
10. A method for handling addressing failure of a network of an automated data storage library, said automated data storage library for accessing data storage media, said automated data storage library comprising a plurality of storage shelves for storing data storage media; at least one data storage drive for reading and/or writing data with respect to said data storage media; at least one robot accessor for transporting said data storage media between said plurality of storage shelves and said at least one data storage drive; a network; and a plurality of processor nodes for operating said automated data storage library, said plurality of processor nodes comprising nodes of said network, each of said plurality of processor nodes comprising at least one processor and at least one interface to said network; said method comprising the steps of:
detecting a node address failure of one of said plurality of processor nodes for said network; and
upon detecting said node address failure, disabling said processor node having said node address failure from said network at said at least one interface.
11. The method of claim 10, wherein said at least one processor node additionally comprises a nonvolatile memory; said method additionally comprising the steps of:
maintaining an alternate node address of said processor node in said nonvolatile memory;
detecting said node address failure, by attempting to determine a node address for said processor node; and
upon failing to determine any usable node address for said processor node, selecting said alternate node address in said nonvolatile memory for said processor node having said node address failure.
12. The method of claim 10, wherein said step of detecting said node address failure, comprises said at least one processor node sensing said network at said processor node interface.
13. A method for handling addressing failure of a network of an automated data storage library, said automated data storage library for accessing data storage media, said automated data storage library comprising a plurality of storage shelves for storing data storage media; at least one data storage drive for reading and/or writing data with respect to said data storage media; at least one robot accessor for transporting said data storage media between said plurality of storage shelves and said at least one data storage drive; a network; and a plurality of processor nodes for operating said automated data storage library, said plurality of processor nodes comprising nodes of said network, each of said plurality of processor nodes comprising at least one processor and at least one interface to said network; said method comprising the steps of:
determining a nominal node address of at least one of said plurality of processor nodes;
sensing node addresses of other processor nodes of said network;
comparing said sensed node addresses of other processor nodes with said nominal node address;
determining existence of any conflict between at least one of said sensed node addresses of other processor nodes and said nominal node address, said existing conflict comprising a node address failure of said processor node for said network; and
upon detection of said node address failure of said processor node for said network, disabling said processor node nominal node address having said node address failure from said network at said at least one interface.
14. The method of claim 13, additionally comprising the step of:
upon detection of said node address failure of said processor node for said network, selecting a node address that avoids said node address failure, said selected node address for addressing said processor node having said node address failure, in said network, upon enabling said processor node in said network.
15. The method of claim 14, wherein said at least one processor node additionally comprises a nonvolatile memory; said method additionally comprising the steps of:
maintaining an alternate node address of said at least one processor node in said nonvolatile memory;
upon said determination of a conflict and detection of said node address failure of said at least one processor node for said network, further comparing said alternate node address of said nonvolatile memory with said sensed node addresses of said other processor nodes; and
if said alternate node address avoids conflict with said sensed node addresses of said other processor nodes, selecting said node address that avoids said node address failure, by selecting said alternate node address of said nonvolatile memory.
16. A method for handling addressing failure of a network of an automated data storage library, said automated data storage library for accessing data storage media, said automated data storage library comprising a plurality of storage shelves for storing data storage media; at least one data storage drive for reading and/or writing data with respect to said data storage media; at least one robot accessor for transporting said data storage media between said plurality of storage shelves and said at least one data storage drive; a network; and a plurality of processor nodes for operating said automated data storage library, said plurality of processor nodes comprising nodes of said network, at least two of said plurality of processor nodes subject to reset, and comprising at least one processor, at least one interface to said network, and a timer, said timer maintaining an indication of time since said processor node has been reset; said method comprising the steps of:
determining a nominal node address of at least one of said plurality of processor nodes;
sensing node addresses of other processor nodes of said network;
comparing said sensed node addresses of other processor nodes with said nominal node address of said at least one processor node;
determining existence of any conflict between at least one of said sensed node addresses of other processor nodes and said nominal node address of said at least one processor node, said existing conflict comprising a node address failure in said network;
comparing the time of said timer of said processor node having said nominal node address to the time of said timer of said other processor node having said conflicting sensed node address, to determine the processor node having the more recent time of said timers; and
if said processor node having said nominal node address has said more recent time, disabling said processor node nominal node address from said network at said at least one interface.
17. The method of claim 16, additionally comprising the step of:
selecting a node address for said processor node having said more recent time, that avoids said node address failure, said selected node address for addressing said processor node having said more recent time, in said network, upon enabling said processor node in said network.
18. The method of claim 17, wherein said automated data storage library additionally comprises a nonvolatile memory; said method additionally comprising the steps of:
maintaining an alternate node address in said nonvolatile memory;
further comparing said alternate node address of said nonvolatile memory for said processor node having said more recent time with said sensed node addresses of said other processor nodes; and
if said alternate node address avoids conflict with said sensed node addresses of said other processor nodes, selecting said node address that avoids said node address failure, by selecting said alternate node address of said nonvolatile memory.
19. An automated data storage library, comprising:
a plurality of storage shelves for storing data storage media;
at least one data storage drive for reading and/or writing data with respect to said data storage media;
at least one robot accessor for transporting said data storage media between said plurality of storage shelves and said at least one data storage drive;
a network; and
a plurality of processor nodes for operating said automated data storage library, said plurality of processor nodes comprising nodes of said network, each of said plurality of processor nodes comprising at least one processor and at least one interface to said network;
at least one of said plurality of processor nodes associated with at least one element of said library, said processor node:
maintains designating information of said at least one element associated with said processor node;
determines a nominal node address as its own for said network;
senses present designating information of at least one element associated with said processor node;
compares said present designating information to said maintained designating information;
determines whether a match is made between said present designating information and said maintained designating information, a failure of said match comprising a node address failure of said processor node for said network; and
upon said node address failure of said processor node for said network, disables said processor node nominal node address from said network at said at least one interface.
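For illustration only (not part of the claims): claim 19 above treats a mismatch between maintained and presently sensed designating information of an associated library element as a node address failure. A minimal sketch, with all names hypothetical:

```python
def check_node(maintained_info, sense_present_info, nominal_address):
    """Claim-19-style check: compare maintained designating information of
    the element associated with a node against the presently sensed
    information. On a mismatch the node address is treated as failed and
    the nominal address is disabled (modeled here by returning None);
    on a match the nominal address is kept."""
    present = sense_present_info()   # stand-in for sensing the element
    if present != maintained_info:
        return None                  # node address failure: disable address
    return nominal_address           # match: nominal address remains valid
```

For example, a node maintaining "frame-2/slot-1" as its element's designating information keeps its address only while sensing that same information.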
The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.
1. A method, comprising:
storing backup data at a computing device, wherein the backup data is from a first of a plurality of backup sessions and from a first of a plurality of backup sources located in separate computer systems, wherein the backup data is stored with a first label identifying the backup data as corresponding to the first backup session and to the first backup source;
deleting stored data at the computing device, wherein the deleting causes the backup data to be stored non-contiguously;
rearranging stored data at the computing device, wherein the rearranging includes:
rearranging, based on the first label, the backup data from the first backup session from being stored non-contiguously to being contiguously stored within a storage container associated with the first backup source; and
rearranging, based on a second label, backup data from a second backup session from being stored non-contiguously to being contiguously stored within the storage container in response to the second label identifying the backup data from the second backup session as corresponding to the first backup source.
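For illustration only (not part of the claims): the rearranging step recited above groups non-contiguously stored backup segments into contiguous runs per label. The segment layout below, a list of (label, payload) tuples, is an illustrative model only.

```python
def compact_by_label(stored_segments):
    """Rearrange non-contiguously stored backup segments into contiguous
    groups keyed by their (backup source, backup session) label, while
    preserving the order in which each group's segments were received."""
    groups = {}
    for label, payload in stored_segments:
        groups.setdefault(label, []).append(payload)  # insertion order kept
    compacted = []
    for label in sorted(groups):          # deterministic container layout
        for payload in groups[label]:
            compacted.append((label, payload))
    return compacted
```

After compaction, segments sharing a label sit next to each other, as in the contiguous storage-container placement the claim describes.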
2. The method of claim 1, wherein the deleting causes segments of the backup data from the first backup session to be stored in an order that is different from a sequential order in which the segments were received from the first backup source; and
wherein the rearranging the stored data includes rearranging the segments in a sequential order corresponding to the order that the segments were received.
3. The method of claim 1, further comprising:
receiving a request to restore data from the first backup session; and
in response to the request, executing a plurality of threads in parallel, wherein each thread is executable to read a respective segment from the first backup session.
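For illustration only (not part of the claims): the parallel restore of claim 3 above, one thread per segment, can be sketched with Python's standard thread pool. The `read_segment` callable stands in for real storage I/O and is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def restore_session(segments, read_segment):
    """Restore a backup session by executing one reader per segment in
    parallel, then reassembling the results in segment order. map()
    preserves input order regardless of thread completion order."""
    with ThreadPoolExecutor(max_workers=len(segments)) as pool:
        return list(pool.map(read_segment, segments))
```

Ordering matters here: even though segment reads complete in arbitrary order, `map` returns results in the order the segment list supplied them.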
4. The method of claim 1, further comprising:
determining that the backup data from the first backup session corresponds to the first backup source based on information received with the backup data from the first backup session.
5. The method of claim 1, wherein the storage container stores backup data from a plurality of backup sessions of the first backup source and backup data from one or more backup sessions of a second backup source that is different from the first backup source; and
wherein the rearranging the stored data includes:
storing the backup data from the plurality of backup sessions contiguously within the storage container; and
storing backup data from the one or more backup sessions contiguously within the storage container.
6. A computing device, comprising:
a processor;
memory having stored thereon instructions executable by the processor to cause the computing device to perform operations comprising:
removing segments of backup data from a storage system, wherein the backup data includes backup data from a first backup session of a first backup source, wherein the backup data includes backup data from a second backup session of a second backup source, wherein the removing causes the backup data to be stored non-contiguously at the storage system; and
compacting backup data stored at the storage system, wherein the compacting includes:
rearranging, based on a first label, the backup data from the first backup session such that the backup data from the first backup session is stored contiguously within a first storage container associated with the first backup source, wherein the first label identifies the backup data from the first backup session as being from the first backup source; and
rearranging, based on a second label, the backup data from the second backup session such that the backup data from the second backup session is stored contiguously within a second storage container associated with the second backup source, wherein the second label identifies the backup data from the second backup session as being from the second backup source.
7. The computing device of claim 6, further comprising:
the storage system that includes the first and second storage containers.
8. The computing device of claim 6, wherein the compacting further includes storing the backup data of the first backup session in a sequential order in which the backup data of the first backup session is received at the storage system.
9. The computing device of claim 6, wherein the removing includes relocating one or more segments of the backup data of the first backup session.
10. The computing device of claim 6,
wherein the backup data from the storage system includes backup data from a third backup session of the first backup source; and
wherein the compacting the data further includes rearranging the backup data from the third backup session such that the backup data from the third backup session is stored contiguously within the first storage container.
11. The computing device of claim 10, wherein the first backup source corresponds to a first remote device included in the storage system, and wherein the second backup source corresponds to a second remote device included in the storage system, wherein the second remote device is different from the first remote device.
12. The computing device of claim 10,
wherein the first backup source corresponds to a first backup configuration at a first device in the storage system; and
wherein the second backup source corresponds to a second backup configuration at the first device, the second backup configuration being different from the first backup configuration.
13. A non-transitory computer-readable storage medium having stored thereon instructions that, responsive to execution by a computing device, cause the computing device to perform operations comprising:
storing backup data from a first backup session of a first of a plurality of backup sources associated with different computing systems, wherein the backup data is stored with a first label that identifies the backup data as being from the first backup session and being from the first backup source;
relocating segments of the backup data, wherein the relocating causes the backup data to be stored non-contiguously in a first storage container associated with the first backup source; and
compacting data stored at the first storage container, wherein the compacting includes using the first label to rearrange the data from being stored non-contiguously to being stored as a contiguous group within the first storage container.
14. The non-transitory computer-readable storage medium of claim 13, wherein the compacting further includes storing the backup data in an order that corresponds to an order that segments of the backup data were received at a storage device associated with the computing device.
15. The non-transitory computer-readable storage medium of claim 13, wherein the storing includes removing one or more segments of the backup data to facilitate relocating the segments of the backup data.
16. The non-transitory computer-readable storage medium of claim 13, wherein the compacting further includes rearranging backup data from a second backup session of the first backup source from being stored non-contiguously to being stored as a contiguous group within the first storage container.
17. The non-transitory computer-readable storage medium of claim 16, wherein the first backup source corresponds to a first remote device.
18. The non-transitory computer-readable storage medium of claim 13, wherein the operations further comprise:
receiving a request to restore at least a portion of backup data; and
instantiating a plurality of threads, each executable in parallel to read a respective segment of the portion.