- Tesla’s in-house supercomputer has 1,600 more graphics processing units (GPUs) than a year ago.
- The machine now has 7,360 A100 chips, which are designed for data centre servers.
- Tesla may be just getting started with its plans for high-performance computing.
The number of GPUs in Tesla's in-house supercomputer has grown by 1,600, a 28% increase over the figure the company disclosed a year ago.
Tim Zaman, an engineering lead at Tesla, says the upgrade would make the machine the seventh-largest in the world by GPU count.
The machine now holds a total of 7,360 Nvidia A100 GPUs, which are designed for data centre servers but use the same Ampere architecture as Nvidia's top GeForce RTX 30-series cards.
Tesla supercomputer upgraded
Right now, Tesla probably needs all the processing power it can get. The company is training neural networks to process the huge amounts of video data its cars collect.
With this latest upgrade, Tesla may be just getting started with its plans for high-performance computing (HPC).
Elon Musk said in June 2020, “Tesla is creating a neural net training computer dubbed Dojo.” He said the machine could exceed 1 exaFLOP, or 1,000 petaFLOPs.
A machine capable of 1 exaFLOPS would rank among the world's fastest supercomputers. Only a handful of systems, notably Frontier at Oak Ridge National Laboratory in Tennessee, have passed the exascale barrier.
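For context, a rough estimate of the A100 cluster's aggregate peak is easy to compute. This is a hypothetical back-of-envelope sketch, assuming Nvidia's published dense BF16 peak of 312 TFLOPS per A100; sustained training throughput is always well below peak, and supercomputer rankings use FP64, not BF16.

```python
# Back-of-envelope aggregate peak for Tesla's A100 cluster.
# Assumption: 312 TFLOPS dense BF16 per A100 (Nvidia's published peak).
GPU_COUNT = 7_360
A100_BF16_TFLOPS = 312

total_tflops = GPU_COUNT * A100_BF16_TFLOPS
total_exaflops = total_tflops / 1_000_000  # 1 exaFLOP = 1e6 TFLOPs
print(f"{total_exaflops:.2f} exaFLOPS peak (BF16)")
```

Even on this optimistic measure, the cluster's theoretical BF16 peak is around 2.3 exaFLOPS, which is why exascale figures are quoted alongside it.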
Musk also invited engineers to help build the new machine, tweeting, “Consider joining our AI or computer/chip teams if this sounds interesting.”
Dojo won’t use Nvidia hardware; Tesla’s own D1 Dojo chip will power the machine. At AI Day, the automaker said the chip may deliver 362 TFLOPs.