
1. A digital device for maintaining a state of a precharged dot line, periodically precharged by a global precharge signal, comprising:
a. a data input signal that can have a selected one of a first value and a second value, the first value being a value that would be reflected by the dot line being in a charged state;
b. a precharge circuit, responsive to the global precharge signal, that is configured to precharge the dot line;
c. a guaranteed write through logic device, responsive to the data input signal, that ensures that charge is applied to the dot line whenever the data input signal has the first value; and
d. a guaranteed write through inhibitor, responsive to a write through gate signal, that is configured to inhibit selectively the guaranteed write through logic device from applying charge to the dot line when the write through gate signal is in a guarantee inhibit state.
2. The digital device of claim 1, wherein the precharge circuit comprises a transistor having a source coupled to a voltage source, a drain coupled to the dot line and a gate coupled to the global precharge signal, so that the transistor enters a conducting state when the global precharge signal is in a precharge state.
3. The digital device of claim 2, wherein the guaranteed write through inhibitor comprises:
a. an inverter that receives input from the data input signal;
b. a NAND gate that receives input from the inverter and the write through gate signal; and
c. an AND gate that receives input from the NAND gate and the global precharge signal.
4. The digital device of claim 1, wherein the write through gate signal is in a guarantee inhibit state when the write through gate signal has a logical “0” value.
5. A static read only memory with write-through capability, comprising:
a. a memory cell configured to store a bit of data;
b. an enable signal configured to enable writing a value from an input into the memory cell and to enable reading a value from the memory cell onto a dot line;
c. a write-through circuit that allows a value being written into the memory cell to be read at the dot line in a single clock cycle;
d. a precharge circuit configured to precharge the dot line to a predetermined value when the dot line is not being read, the precharge circuit including a transistor having a source coupled to a voltage source, a drain coupled to the dot line, and a gate that causes the dot line to be coupled to a voltage source when the transistor is in a conducting state;
e. a guaranteed write through logic device configured to drive the transistor into a conducting state and recharge the dot line when a current state of the memory cell and a current value of the input cause the dot line to discharge prematurely and when the current state of the input corresponds to a state in which the dot line should be charged; and
f. a guaranteed write through inhibitor, responsive to a write through gate signal, that is configured to inhibit selectively the guaranteed write through logic device from applying charge to the dot line when the write through gate signal is in a guarantee inhibit state.
6. The static read only memory of claim 5, wherein the transistor comprises a p-type field effect transistor and wherein the guaranteed write through inhibitor comprises:
a. an inverter that receives input from the data input signal;
b. a NAND gate that receives input from the inverter and the write through gate signal; and
c. an AND gate that receives input from the NAND gate and the global precharge signal, the precharge signal having a logic “0” state when the dot line is to be precharged and the data input signal having a logic “0” state when a logic “1” is to be written to the dot line.
7. The static read only memory of claim 5, wherein the write through gate signal is in a guarantee inhibit state when the write through gate signal has a logical “0” value.
8. A method of ensuring that a precharged dot line, that is coupled to an output from an SRAM cell having a write-through capability, can recover from a premature discharge, the SRAM cell configured to store a value indicated by a data input signal and including a precharge circuit that causes the dot line to be precharged when a precharge signal is asserted, the method comprising the actions of:
a. asserting a charge signal onto the dot line when either the precharge signal has been asserted or the data input signal has a value that would cause the SRAM cell to store a logical “1”;
b. coupling the dot line to a charge source when the charge signal is asserted if a write through gate signal is in a guaranteed write through enable state; and
c. preventing coupling of the dot line to a charge source when the charge signal is asserted if the write through gate signal is in a guarantee inhibit state.
9. The method of claim 8 wherein the write through gate signal is in a guarantee inhibit state when the write through gate signal has a logical “0” value.
10. The method of claim 9, wherein the data input signal is a complement of a value that is being written to the SRAM cell and wherein the precharge signal has a logical “0” state when the dot line is to be precharged, the method further comprising the actions of:
a. inverting the data input signal, thereby generating an inverted signal;
b. NAND’ing the inverted signal with the write through gate signal, thereby generating a NAND’ed signal;
c. AND’ing the NAND’ed signal with the global precharge signal, thereby generating an AND’ed signal; and
d. driving a gate of a p-type field effect transistor that selectively couples the dot line to a voltage source with the AND’ed signal.
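The gate-level behavior recited in claims 3 and 10 (invert the data input, NAND with the write through gate signal, AND with the global precharge signal, drive the p-type transistor's gate) can be sketched as a small truth function. This Python rendering is purely illustrative; the signal names and encoding are assumptions, not part of the claims:

```python
def pfet_gate(d_in: int, wtg: int, pre_n: int) -> int:
    """Voltage driven onto the gate of the p-FET coupling the dot line to Vdd.

    d_in  : data input (complement of the value written; 0 means write logic "1")
    wtg   : write through gate signal (1 = guaranteed write-through enabled)
    pre_n : global precharge signal (active low: 0 = precharge the dot line)
    Returns 0 when the p-FET conducts (dot line charged), 1 when it is off.
    """
    inverted = 1 - d_in            # step a: invert the data input signal
    nanded = 1 - (inverted & wtg)  # step b: NAND with the write through gate signal
    return nanded & pre_n          # step c: AND with the global precharge signal
```

Note that because the precharge signal is active low, a logic “0” on `pre_n` forces the AND output low and turns the p-FET on regardless of the inhibitor, which matches the claims: the inhibitor gates only the write-through path, never the periodic precharge.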

The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.

1. A man-hour management system which manages man-hours for producing automobiles, comprising:
a server, a database, connection terminals, and an Ethernet;
said database comprising:
a walk man-hour conversion table for performing registration management of standardized man-hours for walks which are generated by works;
a work constituent condition table for performing registration management of constituent works for use in managing the man-hours, and having conditions for each of the constituent works;
a standardized man-hour table for performing registration management of standardized man-hour analysis contents and standardized man-hours for the constituent works or the constituent work conditions which are under the registration management of said work constituent condition table;
a main man-hour management table for managing item data for constituent works in process units and for performing registration management and/or reorganization management of constituent work items in the units of processes, data being assigned to the constituent work items from said walk man-hour conversion table, said work constituent condition table and said standardized man-hour table, or data being inputted and set to the constituent work items; and
a process name table for performing registration management and/or reorganization management of names of the processes; and
said man-hour management system further including man-hour output means including a man-hour output system program, a timing graph output program, a process balancing table output program, a net & loss aggregation table output program, an individual-process specification aggregation table output program, a history management table output program, and a main man-hour management output program, for outputting man-hour information by being assigned data from said main man-hour management table and said process name table.
2. A man-hour management system according to claim 1, comprising a change history table for performing save management of work change contents in units of the processes;
wherein said man-hour output means outputs the man-hour information by being assigned data also from said change history table.
3. A man-hour management system according to claim 1, comprising a timing graph data table for performing registration management of data of a timing graph, data being assigned to said timing graph data table from said main man-hour management table;
wherein said man-hour output means outputs the man-hour information by being assigned data also from said timing graph data table.
4. A man-hour management system according to claim 1, comprising a line name table for performing registration management of modes of lines which implement works;
wherein said main man-hour management table is assigned data also from said line name table.
5. A man-hour management system according to claim 1, comprising a series table for performing registration management of series and types associated with the series;
wherein said main man-hour management table is assigned data also from said series table.
6. A man-hour management system according to claim 1, comprising a derivation table for performing registration management of derivatives associated with each of the series and types;
wherein said main man-hour management table is assigned data also from said derivation table.
7. A man-hour management system according to claim 1, comprising:
a database in which the tables are stored; and
series data backup means for extracting, from said database, the data of said tables in series units which have become unnecessary, and for re-storing, in said database, said data of said tables extracted in series units.
8. A man-hour management system according to claim 1, wherein each movement of the constituent work is classified into one of a main movement, an auxiliary movement and a quasi movement, and analyzed standardized man-hours are set for each said movement.
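A minimal sketch of how rows of the main man-hour management table might be aggregated per process by one of the output programs of the man-hour output means (e.g., a process balancing table). The field names, types, and aggregation below are illustrative assumptions only, not taken from the claims:

```python
from dataclasses import dataclass

@dataclass
class MainManHourRow:
    """One hypothetical row of the main man-hour management table."""
    process_name: str            # joined against the process name table
    constituent_work: str        # item under the work constituent condition table
    standardized_man_hours: float

def aggregate_by_process(rows):
    """Sum standardized man-hours per process, as a process balancing
    table output program might do before formatting its report."""
    totals = {}
    for r in rows:
        totals[r.process_name] = totals.get(r.process_name, 0.0) + r.standardized_man_hours
    return totals
```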


1. A method of autonomic scaling of virtual machines in a cloud computing environment, the cloud computing environment comprising a plurality of virtual machines (‘VMs’), the VMs comprising modules of automated computing machinery installed upon cloud computers disposed within a data center, the cloud computing environment further comprising a cloud operating system and a data center administration server operably coupled to the VMs, the method comprising:
deploying, by the cloud operating system, an instance of a VM, including flagging the instance of a VM for autonomic scaling and executing a data processing workload on the instance of a VM;
monitoring, by the cloud operating system, one or more operating characteristics of the instance of the VM;
deploying, by the cloud operating system, an additional instance of the VM if a value of an operating characteristic exceeds a first predetermined threshold value, including executing a portion of the data processing workload on the additional instance of the VM; and
terminating operation of the additional instance of the VM if a value of an operating characteristic declines below a second predetermined threshold value.
2. The method of claim 1 wherein:
the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine;
the method further comprises receiving, through a user interface exposed by the self service portal, user specifications of the VM, the user specifications including specifications of computer processors, random access memory, hard disk storage, input/output resources, application programs, and a specification for autonomic scaling that includes predetermined threshold values for operating characteristics; and
deploying an instance of a VM further comprises deploying the instance of a VM in the cloud computing environment in accordance with the received user specifications.
3. The method of claim 1 wherein flagging the instance of a VM for autonomic scaling includes storing, in the cloud operating system in association with an identifier of the instance of a VM, predetermined threshold values of operating characteristics, including threshold values for processor utilization and for memory utilization.
4. The method of claim 1 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine, and deploying an instance of a VM further comprises:
passing by the self service portal user specifications for the instance of a VM to the deployment engine;
implementing and passing to the data center administration server, by the deployment engine, a VM template with the user specifications; and
calling, by the data center administration server, a hypervisor on a cloud computer to install the VM template as an instance of a VM on the cloud computer.
5. The method of claim 1 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine, and deploying an additional instance of the VM further comprises:
passing by the self service portal user specifications for the additional instance of the VM to the deployment engine;
implementing and passing to the data center administration server, by the deployment engine with the user specifications, a same VM template used to deploy a previous instance of the VM; and
calling, by the data center administration server, a hypervisor on a cloud computer to install the same VM template as an additional instance of the VM on a cloud computer.
6. The method of claim 1 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal, and terminating operation of the additional instance of the VM further comprises:
transmitting, by the self service portal to the data center administration server, an instruction to terminate the additional instance of the VM; and
calling, by the data center administration server, a hypervisor on a cloud computer to terminate operation of the additional instance of the VM on the cloud computer.
7. Apparatus for autonomic scaling of virtual machines in a cloud computing environment, the apparatus comprising:
a plurality of virtual machines (‘VMs’), the VMs comprising modules of automated computing machinery installed upon cloud computers disposed within a data center;
a cloud operating system;
a data center administration server operably coupled to the VMs;
at least one computer processor; and
a computer memory operatively coupled to the computer processor, the computer memory having disposed within it computer program instructions which when executed cause the apparatus to function by:
deploying, by the cloud operating system, an instance of a VM, including flagging the instance of a VM for autonomic scaling and executing a data processing workload on the instance of a VM;
monitoring, by the cloud operating system, one or more operating characteristics of the instance of the VM;
deploying, by the cloud operating system, an additional instance of the VM if a value of an operating characteristic exceeds a first predetermined threshold value, including executing a portion of the data processing workload on the additional instance of the VM; and
terminating operation of the additional instance of the VM if a value of an operating characteristic declines below a second predetermined threshold value.
8. The apparatus of claim 7 wherein:
the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine;
the computer program instructions further cause the apparatus to function by receiving, through a user interface exposed by the self service portal, user specifications of the VM, the user specifications including specifications of computer processors, random access memory, hard disk storage, input/output resources, application programs, and a specification for autonomic scaling that includes predetermined threshold values for operating characteristics; and
deploying an instance of a VM further comprises deploying the instance of a VM in the cloud computing environment in accordance with the received user specifications.
9. The apparatus of claim 7 wherein flagging the instance of a VM for autonomic scaling includes storing, in the cloud operating system in association with an identifier of the instance of a VM, predetermined threshold values of operating characteristics, including threshold values for processor utilization and for memory utilization.
10. The apparatus of claim 7 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine, and deploying an instance of a VM further comprises:
passing by the self service portal user specifications for the instance of a VM to the deployment engine;
implementing and passing to the data center administration server, by the deployment engine, a VM template with the user specifications; and
calling, by the data center administration server, a hypervisor on a cloud computer to install the VM template as an instance of a VM on the cloud computer.
11. The apparatus of claim 7 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine, and deploying an additional instance of the VM further comprises:
passing by the self service portal user specifications for the additional instance of the VM to the deployment engine;
implementing and passing to the data center administration server, by the deployment engine with the user specifications, a same VM template used to deploy a previous instance of the VM; and
calling, by the data center administration server, a hypervisor on a cloud computer to install the same VM template as an additional instance of the VM on a cloud computer.
12. The apparatus of claim 7 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal, and terminating operation of the additional instance of the VM further comprises:
transmitting, by the self service portal to the data center administration server, an instruction to terminate the additional instance of the VM; and
calling, by the data center administration server, a hypervisor on a cloud computer to terminate operation of the additional instance of the VM on the cloud computer.
13. A computer program product for autonomic scaling of virtual machines in a cloud computing environment, the cloud computing environment comprising a plurality of virtual machines (‘VMs’), the VMs comprising modules of automated computing machinery installed upon cloud computers disposed within a data center, a cloud operating system, a data center administration server operably coupled to the VMs, the computer program product disposed upon a computer readable storage medium, the computer program product comprising computer program instructions which when executed cause the VMs and computers in the cloud computing environment to function by:
deploying, by the cloud operating system, an instance of a VM, including flagging the instance of a VM for autonomic scaling and executing a data processing workload on the instance of a VM;
monitoring, by the cloud operating system, one or more operating characteristics of the instance of the VM;
deploying, by the cloud operating system, an additional instance of the VM if a value of an operating characteristic exceeds a first predetermined threshold value, including executing a portion of the data processing workload on the additional instance of the VM; and
terminating operation of the additional instance of the VM if a value of an operating characteristic declines below a second predetermined threshold value.
14. The computer program product of claim 13 wherein:
the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine;
the computer program instructions further cause the VMs and computers to function by receiving, through a user interface exposed by the self service portal, user specifications of the VM, the user specifications including specifications of computer processors, random access memory, hard disk storage, input/output resources, application programs, and a specification for autonomic scaling that includes predetermined threshold values for operating characteristics; and
deploying an instance of a VM further comprises deploying the instance of a VM in the cloud computing environment in accordance with the received user specifications.
15. The computer program product of claim 13 wherein flagging the instance of a VM for autonomic scaling includes storing, in the cloud operating system in association with an identifier of the instance of a VM, predetermined threshold values of operating characteristics, including threshold values for processor utilization and for memory utilization.
16. The computer program product of claim 13 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine, and deploying an instance of a VM further comprises:
passing by the self service portal user specifications for the instance of a VM to the deployment engine;
implementing and passing to the data center administration server, by the deployment engine, a VM template with the user specifications; and
calling, by the data center administration server, a hypervisor on a cloud computer to install the VM template as an instance of a VM on the cloud computer.
17. The computer program product of claim 13 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine, and deploying an additional instance of the VM further comprises:
passing by the self service portal user specifications for the additional instance of the VM to the deployment engine;
implementing and passing to the data center administration server, by the deployment engine with the user specifications, a same VM template used to deploy a previous instance of the VM; and
calling, by the data center administration server, a hypervisor on a cloud computer to install the same VM template as an additional instance of the VM on a cloud computer.
18. The computer program product of claim 13 wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal, and terminating operation of the additional instance of the VM further comprises:
transmitting, by the self service portal to the data center administration server, an instruction to terminate the additional instance of the VM; and
calling, by the data center administration server, a hypervisor on a cloud computer to terminate operation of the additional instance of the VM on the cloud computer.
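The scaling policy common to claims 1, 7 and 13 (deploy an additional VM instance when an operating characteristic exceeds a first predetermined threshold, terminate it when the characteristic declines below a second) can be sketched as a single monitoring step. The metric, threshold values, and all names below are hypothetical, not drawn from the claims:

```python
def autoscale_step(cpu_utilization: float, additional_running: bool,
                   upper: float = 0.80, lower: float = 0.30) -> str:
    """One monitoring sample of the claimed policy: return the action the
    cloud operating system would take for the flagged VM instance."""
    if not additional_running and cpu_utilization > upper:
        # first threshold exceeded: deploy an additional instance of the VM
        return "deploy_additional"
    if additional_running and cpu_utilization < lower:
        # declined below the second threshold: terminate the additional instance
        return "terminate_additional"
    return "no_action"
```

Using two distinct thresholds (hysteresis) keeps the system from oscillating between deploying and terminating when the load hovers near a single cut-off.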

The claims below are in addition to those above.
All references to claim(s) which appear below refer to the numbering after this sentence.

1. A device for increasing the volume of strand mats, comprising:
a first roller, around which the strand mats made with glass fibers are wound;
a volume-increaser that applies pressure to each of the strand mats supplied from the first roller so as to weaken the cohesion between the glass fibers and increase the volume of each of the strand mats;
a second roller, around which the volume-increased strand mats are wound;
wherein the volume-increaser comprises multiple compression rollers, between which each of the strand mats supplied from the first roller is passed in a state of being compressed,
and wherein at least one of the compression rollers comprises a driving roller which is rotated by a motor, and a gap adjusting roller which is provided at an upper side of the driving roller to be moved upward and downward.
2. The device as set forth in claim 1, wherein the compression rollers are formed with gear teeth.
3. The device as set forth in claim 1, wherein the gap adjusting roller is installed in a slider to be rotated, the slider being installed in a support in which the driving roller is installed, to be moved upward and downward, and being raised and lowered by a hydraulic cylinder installed in an upper side of the support.