In addition, welding in general doesn't require high voltage, but rather high _current_. As Olin explains, the high voltage is only needed to ionize the shielding gas into a plasma - after that, the high current is what keeps the arc going and hot. So most welders intentionally convert the incoming electricity to a much lower _voltage_ while simultaneously increasing the _current_. Since Power = Voltage * Current, reducing the voltage lets the current increase for the same power; and since the resistance of a plasma arc is small, it makes sense that more _current_ is needed than voltage. This is how a welder can draw 240 VAC * 20 A = 4.8 kW from the wall and deliver 4.8 kW / 40 VDC = 120 ADC into the workpiece. The welder is really a power converter: it turns 4.8 kW of AC input power into a DC plasma (heat source) at the workpiece.

Now in stick (SMAW) welding, the open-circuit voltage can matter. If it is too low, it can be difficult to start and maintain an arc. I forget which, but one of the rod types requires a higher open-circuit voltage than the others; to use that rod type, the welder must support it (usually the bigger models).

Tungsten (TIG) welding is another interesting one. These come in "scratch/lift-start" models (touch the tip to the workpiece and lift to start, just like stick) or "HF start" models. HF start includes a high-voltage "sparker" that starts the arc with the tip several mm from the work, saving wear and tear on the tungsten. Once the arc is established, though, the same rules as stick welding apply - which is why most TIG units can also do stick.

Interestingly, raising the tip away from the work while the arc is established causes the arc voltage - and therefore the arc power - to increase. The result usually isn't desirable, because it puts more thermal power into the workpiece, spread over a larger area.
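The power-conversion arithmetic above can be sketched as a few lines of Python. This is just a back-of-the-envelope illustration using the same figures as the text (240 VAC at 20 A in, a ~40 VDC arc out), and it idealizes the welder as lossless - a real machine's conversion efficiency would reduce the deliverable arc current somewhat:

```python
# Idealized welder power conversion: same power, lower voltage, higher current.
# Figures taken from the text above; lossless conversion is an assumption.

input_voltage_ac = 240.0  # V, mains supply
input_current_ac = 20.0   # A, current drawn from the wall
arc_voltage_dc = 40.0     # V, typical arc voltage at the workpiece

# P = V * I on the input side
input_power = input_voltage_ac * input_current_ac
print(f"Input power: {input_power / 1000:.1f} kW")  # 4.8 kW

# Ignoring converter losses, the same power at the much lower arc
# voltage yields a much higher current into the workpiece.
arc_current = input_power / arc_voltage_dc
print(f"Arc current: {arc_current:.0f} ADC")  # 120 ADC
```

The same relationship explains why heavier welding (thicker material, more current) needs a bigger supply circuit: the arc current scales directly with the input power available.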