🚀 Elevate Your Computing Game!
The HPE NVIDIA Tesla P40 is a powerful 24 GB GPU with a PCI Express interface and GDDR5 memory, well suited to demanding applications in AI, machine learning, and data science. This renewed model provides exceptional performance and reliability at an attractive price point.
Compatible Devices | Desktop |
Graphics Card Interface | PCI Express |
Graphics Ram Type | GDDR5 |
Graphics Coprocessor | NVIDIA Tesla P40 |
Graphics Card Ram | 24 GB |
C**W
A 24GB GPU for less than $200
Much slower than a modern GPU, but also much cheaper. It performs about as well as you can expect at this price, and for that it earns my five-star stamp of approval.
W**.
Low-cost generative AI entry point
The Nvidia Tesla P40 is a datacenter-class GPU with 24 GB of VRAM, first introduced in 2016. I have used my P40 to run Stable Diffusion, Whisper speech-to-text, Coqui AI text-to-speech, and a variety of local large language models. It has CUDA compute capability 6.1; I am running CUDA toolkit 12.1 with the Nvidia 535 driver on Ubuntu 22.

The P40 does not have tensor cores or other features found on more recent GPUs. My practical benchmarks consistently show it giving about half the inference performance of, say, an NVIDIA RTX 4090. Its main distinction is that you can get these cards used or refurbished for about a tenth the cost of a 4090, and they let you use software that requires an NVIDIA GPU.

These cards were designed for passive cooling in servers, so running them in consumer-grade hardware means overcoming a number of challenges. First, the power connector is an 8-pin CPU power type: the PCIe power cables typical of consumer GPUs are wired differently, and you could damage a P40 if you get this wrong. Second, the card draws 250 W at full load, so your power supply has to be able to handle that. Third, you will need additional cooling; I used a fan shroud with a fan that fits on the end of the P40. Fourth, these cards require the 'Above 4G Decoding' BIOS setting (or 'Resizable BAR') and may prevent the host computer from booting if it is not set correctly; many older workstations do not support this in BIOS. Fifth, getting the driver and CUDA software set up correctly tends to be difficult in itself.

The bottom line: if you need maximum economy to get into generative AI and are willing to deal with the issues that come with these cards, the P40 is a good solution. If convenience is what you value, then you should keep looking at other options.
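The 250 W full-load figure above is the one hard number in the power discussion; everything else in this sketch (the CPU and base-system draws, the 20% safety margin, and the helper name `required_psu_watts`) is an illustrative assumption for sizing a power supply, not a measurement from the review:

```python
import math

def required_psu_watts(n_gpus=1, gpu_w=250, cpu_w=125, base_w=75, margin=0.2):
    """Rough PSU sizing: sum of component draws plus a safety margin.

    gpu_w=250 comes from the review; cpu_w, base_w, and margin are
    illustrative assumptions.
    """
    total = n_gpus * gpu_w + cpu_w + base_w
    return math.ceil(total * (1 + margin))

print(required_psu_watts())          # one P40  -> 540
print(required_psu_watts(n_gpus=2))  # two P40s -> 840
```

Under these assumptions, even a single P40 pushes a typical 500 W consumer supply past its comfort zone, which matches the reviewer's warning.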
K**T
Didn’t work.
Didn’t come with a power plug, and once I bought one it still didn’t work. The server won’t POST.
C**1
Missing fan plate shown in description
Used but clean. Not as shown in the photo: the cooling-fan adapter is missing.
D**.
Useful for LLM training.
The 4090s are tasked with graphical AI work, so I picked up a couple of these P40s; I needed the 24 GB of VRAM for the LLaMa projects. The card arrived in great working condition and was an easy install, though I had to use a 3D printer to make a custom housing for the fan. These cards get HOT.
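A back-of-the-envelope check behind "needed the 24 GB VRAM": model weights alone take roughly parameter count times bytes per parameter. The helper below is a hypothetical sketch (the function names are mine, and it deliberately ignores activations, KV cache, and optimizer state, which matter a great deal for training):

```python
def weights_vram_gb(params_billions, bytes_per_param):
    """Approximate VRAM (GB) needed for model weights only."""
    # 1e9 params * bytes_per_param bytes ~= params_billions * bytes_per_param GB
    return params_billions * bytes_per_param

def fits_on_card(params_billions, bytes_per_param, card_gb=24):
    """True if the weights alone fit in the card's VRAM (P40: 24 GB)."""
    return weights_vram_gb(params_billions, bytes_per_param) <= card_gb

print(fits_on_card(13, 2))  # 13B at fp16: ~26 GB, does not fit -> False
print(fits_on_card(13, 1))  # 13B at int8: ~13 GB, fits -> True
```

By this estimate a 13B model only fits on a single P40 once quantized, which is consistent with the reviewer running local LLMs on these cards.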
P**R
This one declined to work
I was unable to get it working, and it was too cheap to be worth returning. So now I have another expensive brick in the wall.