GPU Benchmarks and Hierarchy 2023: Graphics Cards Ranked

GPU Benchmarks and Performance Hierarchy
(Image credit: Tom's Hardware)

Our GPU benchmarks hierarchy ranks all current and previous generation graphics cards by performance, drawing on Tom's Hardware's exhaustive testing of current and previous generation GPUs, including all of the best graphics cards. Whether it's playing games, running artificial intelligence workloads like Stable Diffusion, or doing professional video editing, your graphics card typically plays the biggest role in determining performance; even the best CPUs for gaming take a secondary role.

We're retesting all of the ray-tracing capable GPUs on a slightly revamped test suite, using a Core i9-13900K instead of a Core i9-12900K. Our recent reviews use the updated test PC, but our hierarchy continues to use the older PC. The latest update to our GPU hierarchy is the addition of Intel's Johnny-come-lately Arc A580. It's a bit slower than the A750 and trades blows with the RX 6600, while using more power.

Our full GPU hierarchy using traditional rendering (aka, rasterization) comes first, followed by our ray tracing GPU benchmarks hierarchy. The latter of course requires a ray tracing capable GPU, so only AMD's RX 7000/6000-series, Intel's Arc, and Nvidia's RTX cards are present. All results were gathered without enabling DLSS, FSR, or XeSS on the various cards, mind you.

November 2023 Update

There's a bit of a respite in new GPUs, as we haven't had anything launch since last month's Arc A580. We don't expect new cards until the Nvidia 40-series Super parts arrive, whenever that will be (January, probably).

Nvidia's Ada Lovelace architecture powers its latest generation RTX 40-series, with new features like DLSS 3 Frame Generation — and for all RTX cards, Nvidia DLSS 3.5 Ray Reconstruction is coming this fall. AMD's RDNA 3 architecture powers the RX 7000-series, with five desktop cards filling out the product stack. Meanwhile, Intel's Arc Alchemist architecture brings a third player into the dedicated GPU party, even if it's more of a competitor for the previous generation midrange offerings.

On page two, you'll find our 2020–2021 benchmark suite, which covers all of the previous generation GPUs running our older test suite on a Core i9-9900K testbed. It's no longer being actively updated. We also have the legacy GPU hierarchy (without benchmarks, sorted by theoretical performance) for reference purposes.

The following tables sort everything solely by our performance-based GPU gaming benchmarks, at 1080p "ultra" for the main suite and at 1080p "medium" for the DXR suite. Price, graphics card power consumption, overall efficiency, and features aren't factored into the rankings here. The current 2022/2023 results use an Alder Lake Core i9-12900K testbed. Now let's hit the benchmarks and tables.

GPU Benchmarks Ranking 2023

For our latest benchmarks, we've tested nearly every GPU released in the past seven years, plus a few extras, at 1080p medium and 1080p ultra, and sorted the table by the 1080p ultra results. Where it makes sense, we also test at 1440p ultra and 4K ultra. All of the scores are scaled relative to the top-ranking 1080p ultra card, which in our new suite is the RTX 4090, and its lead only grows at 1440p and 4K.

You can also see the above summary chart showing the relative performance of the cards we've tested across the past several generations of hardware at 1080p ultra — swipe through the above gallery if you want to see the 1080p medium, 1440p and 4K ultra images. There are a few missing options (e.g., the GT 1030, RX 550, and several Titan cards), but otherwise it's basically complete. Note that we also have data in the table below for some of the other older GPUs.

The eight games we're using for our standard GPU benchmarks hierarchy are Borderlands 3 (DX12), Far Cry 6 (DX12), Flight Simulator (DX11 AMD/DX12 Intel/Nvidia), Forza Horizon 5 (DX12), Horizon Zero Dawn (DX12), Red Dead Redemption 2 (Vulkan), Total War Warhammer 3 (DX11), and Watch Dogs Legion (DX12). The fps score is the geometric mean (equal weighting) of the eight games. Note that the specifications column links directly to our original review for the various GPUs.
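The composite score described above is straightforward to reproduce. Here's a minimal sketch in Python; the per-game fps values are hypothetical stand-ins, not our measured results.

```python
from math import prod

def geomean(values):
    """Equal-weight geometric mean, as used for the composite fps score."""
    return prod(values) ** (1 / len(values))

# Hypothetical per-game fps results for one card across the eight games
fps = [120.0, 95.0, 110.0, 140.0, 105.0, 88.0, 130.0, 99.0]
composite = geomean(fps)

# Table scores are then scaled relative to the fastest card, e.g.:
# relative_pct = 100 * composite / geomean(fastest_card_fps)
```

Unlike a plain average, the geometric mean keeps one outlier title (very high or very low fps) from dominating the composite.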

GPU Rasterization Hierarchy, Key Takeaways

Graphics Card | Lowest Price | 1080p Ultra | 1080p Medium | 1440p Ultra | 4K Ultra | Specifications (Links to Review)
GeForce RTX 4090 | $2,394 | 100.0% (151.6fps) | 100.0% (189.6fps) | 100.0% (143.1fps) | 100.0% (114.1fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
Radeon RX 7900 XTX | $959 | 97.3% (147.5fps) | 98.7% (187.2fps) | 93.4% (133.7fps) | 81.6% (93.0fps) | Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W
GeForce RTX 4080 | $1,229 | 94.0% (142.6fps) | 97.3% (184.4fps) | 90.1% (129.0fps) | 77.8% (88.7fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W
Radeon RX 7900 XT | $779 | 93.1% (141.2fps) | 96.6% (183.0fps) | 86.9% (124.3fps) | 69.8% (79.6fps) | Navi 31, 5736 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W
Radeon RX 6950 XT | $808 | 89.6% (135.8fps) | 98.9% (187.4fps) | 79.5% (113.7fps) | 59.3% (67.6fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
GeForce RTX 4070 Ti | $781 | 89.3% (135.4fps) | 95.4% (180.9fps) | 80.5% (115.1fps) | 62.9% (71.8fps) | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W
GeForce RTX 3090 Ti | $1,899 | 87.5% (132.6fps) | 94.3% (178.8fps) | 80.1% (114.7fps) | 67.0% (76.4fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
Radeon RX 6900 XT | $699 | 87.0% (132.0fps) | 97.7% (185.3fps) | 75.9% (108.6fps) | 55.6% (63.5fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3090 | $1,250 | 84.1% (127.6fps) | 92.7% (175.8fps) | 75.4% (107.9fps) | 62.3% (71.0fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
Radeon RX 6800 XT | $489 | 84.0% (127.3fps) | 95.6% (181.2fps) | 72.0% (103.0fps) | 52.1% (59.4fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
Radeon RX 7800 XT | $499 | 83.9% (127.2fps) | 95.8% (181.5fps) | 72.7% (104.0fps) | 53.2% (60.7fps) | Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W
GeForce RTX 3080 Ti | $999 | 83.1% (126.0fps) | 91.5% (173.4fps) | 74.0% (105.8fps) | 60.6% (69.1fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
GeForce RTX 3080 12GB | $859 | 81.9% (124.2fps) | 90.2% (170.9fps) | 72.7% (104.0fps) | 58.7% (67.0fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
GeForce RTX 4070 | $574 | 81.5% (123.6fps) | 93.0% (176.3fps) | 69.1% (98.9fps) | 50.2% (57.2fps) | AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W
GeForce RTX 3080 | $1,039 | 78.5% (119.0fps) | 89.2% (169.2fps) | 68.5% (98.1fps) | 54.7% (62.4fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
Radeon RX 6800 | $389 | 76.7% (116.3fps) | 91.8% (174.0fps) | 63.1% (90.2fps) | 44.6% (50.9fps) | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
Radeon RX 7700 XT | $429 | 75.0% (113.7fps) | 89.3% (169.4fps) | 63.5% (90.9fps) | 44.1% (50.2fps) | Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W
GeForce RTX 3070 Ti | $499 | 69.8% (105.8fps) | 85.1% (161.3fps) | 59.0% (84.4fps) | 41.8% (47.7fps) | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Radeon RX 6750 XT | $379 | 68.7% (104.2fps) | 87.0% (164.9fps) | 54.3% (77.7fps) | 37.5% (42.8fps) | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
GeForce RTX 4060 Ti 16GB | $449 | 67.2% (102.0fps) | 85.7% (162.5fps) | 52.9% (75.7fps) | 36.5% (41.6fps) | AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W
GeForce RTX 4060 Ti | $379 | 67.1% (101.7fps) | 84.3% (159.8fps) | 52.8% (75.5fps) | 34.7% (39.6fps) | AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W
GeForce RTX 3070 | $449 | 66.3% (100.5fps) | 82.4% (156.2fps) | 55.2% (79.0fps) | 38.9% (44.4fps) | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
Radeon RX 6700 XT | $309 | 66.1% (100.3fps) | 84.7% (160.6fps) | 51.4% (73.5fps) | 35.3% (40.3fps) | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
Titan RTX | N/A | 65.5% (99.3fps) | 82.6% (156.6fps) | 55.6% (79.5fps) | 41.9% (47.8fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
GeForce RTX 2080 Ti | N/A | 64.7% (98.1fps) | 81.2% (154.0fps) | 53.8% (77.0fps) | 39.4% (44.9fps) | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
GeForce RTX 3060 Ti | $434 | 60.9% (92.3fps) | 78.2% (148.2fps) | 49.6% (71.0fps) | | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
GeForce RTX 2080 Super | N/A | 57.3% (86.8fps) | 74.7% (141.7fps) | 46.0% (65.8fps) | 32.2% (36.7fps) | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
Radeon RX 6700 10GB | $280 | 56.9% (86.2fps) | 76.5% (145.1fps) | 43.7% (62.6fps) | 28.9% (32.9fps) | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W
GeForce RTX 4060 | $299 | 56.0% (84.9fps) | 75.1% (142.3fps) | 42.8% (61.2fps) | 27.9% (31.9fps) | AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W
GeForce RTX 2080 | N/A | 55.1% (83.6fps) | 72.0% (136.5fps) | 43.9% (62.8fps) | | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Radeon RX 7600 | $259 | 54.3% (82.3fps) | 75.9% (143.9fps) | 40.0% (57.3fps) | 25.5% (29.1fps) | Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W
Radeon RX 6650 XT | $250 | 52.7% (80.0fps) | 72.9% (138.2fps) | 39.5% (56.5fps) | | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W
GeForce RTX 2070 Super | N/A | 51.6% (78.3fps) | 68.3% (129.5fps) | 40.6% (58.1fps) | | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Radeon RX 6600 XT | $245 | 51.5% (78.1fps) | 71.6% (135.8fps) | 38.6% (55.2fps) | | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
Intel Arc A770 16GB | $279 | 50.7% (76.9fps) | 61.4% (116.4fps) | 41.8% (59.8fps) | 30.9% (35.3fps) | ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W
Intel Arc A770 8GB | N/A | 49.7% (75.3fps) | 60.9% (115.5fps) | 40.2% (57.5fps) | 29.1% (33.2fps) | ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Radeon RX 5700 XT | N/A | 48.3% (73.3fps) | 65.9% (124.9fps) | 37.1% (53.1fps) | 25.6% (29.3fps) | Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225W
GeForce RTX 3060 | N/A | 48.1% (73.0fps) | 63.8% (121.0fps) | 37.7% (54.0fps) | | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
Intel Arc A750 | $179 | 46.7% (70.8fps) | 58.2% (110.4fps) | 37.5% (53.7fps) | 27.3% (31.1fps) | ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
GeForce RTX 2070 | N/A | 46.4% (70.3fps) | 62.8% (119.0fps) | 36.1% (51.6fps) | | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon VII | N/A | 45.9% (69.5fps) | 60.1% (113.9fps) | 37.0% (53.0fps) | 27.6% (31.5fps) | Vega 20, 3840 shaders, 1750MHz, 16GB HBM2@2.0Gbps, 1024GB/s, 300W
Radeon RX 6600 | $209 | 44.4% (67.3fps) | 62.1% (117.7fps) | 32.6% (46.6fps) | | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
GeForce GTX 1080 Ti | N/A | 43.8% (66.4fps) | 58.1% (110.2fps) | 35.1% (50.2fps) | 25.9% (29.5fps) | GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250W
GeForce RTX 2060 Super | N/A | 43.6% (66.2fps) | 59.0% (111.8fps) | 33.6% (48.1fps) | | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Intel Arc A580 | N/A | 42.9% (65.1fps) | 53.3% (101.1fps) | 34.1% (48.8fps) | 24.5% (27.9fps) | ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W
Radeon RX 5700 | N/A | 42.6% (64.5fps) | 58.4% (110.8fps) | 32.6% (46.7fps) | | Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180W
Radeon RX 5600 XT | N/A | 38.1% (57.8fps) | 52.7% (100.0fps) | 29.4% (42.0fps) | | Navi 10, 2304 shaders, 1750MHz, 8GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX Vega 64 | N/A | 37.4% (56.7fps) | 49.7% (94.3fps) | 29.1% (41.6fps) | 20.6% (23.5fps) | Vega 10, 4096 shaders, 1546MHz, 8GB HBM2@1.89Gbps, 484GB/s, 295W
GeForce RTX 2060 | N/A | 36.9% (55.9fps) | 52.9% (100.2fps) | 27.9% (39.9fps) | | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
GeForce GTX 1080 | N/A | 35.0% (53.0fps) | 47.4% (89.9fps) | 27.6% (39.4fps) | | GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180W
GeForce RTX 3050 | N/A | 34.2% (51.9fps) | 46.8% (88.8fps) | 26.9% (38.5fps) | | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1070 Ti | N/A | 33.7% (51.1fps) | 45.2% (85.7fps) | 26.5% (37.9fps) | | GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180W
Radeon RX Vega 56 | N/A | 33.4% (50.6fps) | 44.4% (84.2fps) | 25.8% (37.0fps) | | Vega 10, 3584 shaders, 1471MHz, 8GB HBM2@1.6Gbps, 410GB/s, 210W
GeForce GTX 1660 Super | N/A | 29.8% (45.3fps) | 43.5% (82.5fps) | 22.7% (32.5fps) | | TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125W
GeForce GTX 1660 Ti | N/A | 29.7% (45.0fps) | 43.3% (82.1fps) | 22.6% (32.3fps) | | TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120W
GeForce GTX 1070 | N/A | 29.5% (44.7fps) | 39.6% (75.0fps) | 23.1% (33.1fps) | | GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150W
GeForce GTX 1660 | N/A | 26.6% (40.3fps) | 39.4% (74.7fps) | 20.1% (28.7fps) | | TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 5500 XT 8GB | N/A | 26.2% (39.7fps) | 38.0% (72.1fps) | 19.7% (28.2fps) | | Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 590 | N/A | 25.9% (39.3fps) | 36.2% (68.5fps) | 20.3% (29.0fps) | | Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225W
GeForce GTX 980 Ti | N/A | 23.7% (35.9fps) | 33.0% (62.6fps) | 18.6% (26.6fps) | | GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250W
Radeon RX 580 8GB | N/A | 23.3% (35.3fps) | 32.6% (61.7fps) | 18.2% (26.0fps) | | Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185W
Radeon R9 Fury X | N/A | 23.2% (35.2fps) | 33.7% (63.8fps) | | | Fiji, 4096 shaders, 1050MHz, 4GB HBM2@2Gbps, 512GB/s, 275W
GeForce GTX 1650 Super | N/A | 22.3% (33.9fps) | 35.7% (67.7fps) | | | TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100W
Radeon RX 5500 XT 4GB | N/A | 22.0% (33.3fps) | 35.2% (66.8fps) | | | Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1060 6GB | N/A | 21.1% (32.1fps) | 30.4% (57.7fps) | 16.1% (23.0fps) | | GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 6500 XT | N/A | 20.2% (30.6fps) | 34.7% (65.8fps) | 12.6% (18.0fps) | | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon R9 390 | N/A | 19.6% (29.8fps) | 26.9% (51.1fps) | | | Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275W
GeForce GTX 980 | N/A | 19.1% (28.9fps) | 28.3% (53.6fps) | | | GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165W
GeForce GTX 1650 GDDR6 | N/A | 19.0% (28.8fps) | 29.9% (56.6fps) | | | TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75W
Intel Arc A380 | $119 | 18.7% (28.4fps) | 28.6% (54.3fps) | 13.6% (19.5fps) | | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 570 4GB | N/A | 18.5% (28.1fps) | 28.2% (53.6fps) | 13.9% (19.9fps) | | Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150W
GeForce GTX 1650 | N/A | 17.8% (27.0fps) | 27.1% (51.3fps) | | | TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75W
GeForce GTX 970 | N/A | 17.5% (26.5fps) | 25.9% (49.0fps) | | | GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145W
Radeon RX 6400 | N/A | 15.9% (24.1fps) | 27.0% (51.1fps) | | | Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W
GeForce GTX 1050 Ti | N/A | 13.1% (19.8fps) | 20.0% (38.0fps) | | | GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75W
GeForce GTX 1060 3GB* | N/A | | 27.7% (52.5fps) | | | GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120W
GeForce GTX 1630 | N/A | 11.1% (16.9fps) | 17.8% (33.8fps) | | | TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75W
Radeon RX 560 4GB | N/A | 9.7% (14.7fps) | 16.7% (31.7fps) | | | Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80W
GeForce GTX 1050* | N/A | | 15.7% (29.7fps) | | | GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75W
Radeon RX 550 4GB | N/A | | 10.3% (19.5fps) | | | Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50W

*: GPU couldn't run all tests, so the overall score is slightly skewed at 1080p ultra.

While the RTX 4090 does technically take first place at 1080p ultra, it's the 1440p and especially 4K numbers that impress. It's only 3% faster than the next closest RX 7900 XTX at 1080p ultra, but that increases to 8% at 1440p and then 23% at 4K. Against the RTX 3090 Ti, it's also a major upgrade: 14% faster at 1080p, 27% faster at 1440p, and 51% faster at 4K.
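Those leads are simple ratios of the composite fps numbers in the table. As a quick sanity check:

```python
def pct_faster(a_fps, b_fps):
    """How much faster card A is than card B, rounded to a whole percent."""
    return round((a_fps / b_fps - 1) * 100)

# RTX 4090 vs. RX 7900 XTX, using the composite fps from the table above
print(pct_faster(151.6, 147.5))  # 1080p ultra -> 3
print(pct_faster(114.1, 93.0))   # 4K ultra -> 23
```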

(Note that the above fps numbers incorporate both the average and minimum fps into a single score — with the average given more weight than the 1% low fps.)
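Conceptually, that blend looks like the following sketch. The 75/25 split is purely an illustrative assumption; the only stated constraint is that the average counts for more than the 1% lows.

```python
def blended_fps(avg_fps, low_fps, avg_weight=0.75):
    """Combine average fps and 1% low fps into a single score.
    The 75/25 weighting here is an assumed example, not the exact
    weighting used for the published numbers."""
    return avg_weight * avg_fps + (1 - avg_weight) * low_fps

# e.g., 100 fps average with 60 fps 1% lows
score = blended_fps(100.0, 60.0)  # 90.0
```

Folding the 1% lows in this way rewards cards with consistent frame delivery, not just a high average.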

Again, keep in mind that we're not including any ray tracing or DLSS results in the above table, as we intend to use the same test suite with the same settings on all current and previous generation graphics cards. Since only RTX cards support DLSS (and only RTX 40-series supports DLSS 3), that would drastically limit which cards we could directly compare. You can see DLSS 2/3 and FSR 2 upscaling results in our RTX 4070 review if you want to see how the various upscaling modes can help.

Of course the RTX 4090 comes at a steep price, though it's not much worse than the previous generation RTX 3090's launch price. In fact, we'd say the 4090 is a far better proposition in some respects: at launch, the 3090 was only a minor performance improvement over the 3080, albeit with more than double the VRAM. Nvidia pulled out all the stops with the 4090, increasing the core counts, clock speeds, and power limits to push it beyond all contenders.

Stepping down from the RTX 4090, the RTX 4080 and RX 7900 XTX trade blows at higher resolutions, while CPU bottlenecks come into play at 1080p. We'll be switching to an i9-13900K in the near future, and you can see those results in our latest graphics card reviews — check the GeForce RTX 4060, GeForce RTX 4070, Radeon RX 7600, Radeon RX 7700 XT, and Radeon RX 7800 XT for examples.

Intel Arc A770 Limited Edition

(Image credit: Intel)

Outside of the latest releases from AMD and Nvidia, the RX 6000- and RTX 30-series chips still perform reasonably well and in some cases represent a better 'deal' — even though the hardware can be over two years old now. Intel's Arc GPUs also fall into this category and are something of a wild card.

We've been testing and retesting GPUs periodically, and the Arc chips running the latest drivers now complete all of our benchmarks without any major anomalies. (Minecraft was previously a problem, though Intel has finally sorted that out.) They're not great on efficiency, but overall performance and pricing for the A750 is quite good.

Turning to the previous generation GPUs, the RTX 20-series and GTX 16-series chips end up scattered throughout the results, along with the RX 5000-series. The general rule of thumb is that you get one or two "model upgrades" with the newer architectures, so for example the RTX 2080 Super comes in just below the RTX 3060 Ti, while the RX 5700 XT lands a few percent behind the RX 6600 XT.

Go back far enough and you can see how modern games at ultra settings severely punish cards that don't have more than 4GB VRAM. We've been saying for a few years now that 4GB was just scraping by, and these days we'd avoid buying anything with less than 8GB of VRAM — 12GB or more is desirable for a mainstream or high-end GPU. The GTX 1060 3GB and GTX 1050 actually failed to run some of our tests, which skews their results a bit, even though they do better at 1080p medium.

Now let's switch over to the ray tracing hierarchy.

Dying Light 2 settings and image quality comparisons

(Image credit: Techland)

Ray Tracing GPU Benchmarks Ranking 2023

Enabling ray tracing, particularly with demanding games like those we're using in our DXR test suite, can cause framerates to drop off a cliff. We're testing with "medium" and "ultra" ray tracing settings. Medium means using the medium graphics preset but turning on ray tracing effects (set to "medium" if that's an option; otherwise, "on"), while ultra turns on all of the RT options at more or less maximum quality.

Because ray tracing is so much more demanding, we're sorting these results by the 1080p medium scores. That's also because the RX 6500 XT and RX 6400 along with the Arc A380 basically can't handle ray tracing even at these settings, and testing at anything more than 1080p medium would be fruitless.

The five ray tracing games we're using are Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Metro Exodus Enhanced, and Minecraft — all of these use the DirectX 12 / DX12 Ultimate API. The fps score is the geometric mean (equal weighting) of the five games, and the percentage is scaled relative to the fastest GPU in the list, which again is the GeForce RTX 4090.

GPU Ray Tracing Hierarchy, Key Takeaways

  • Nvidia absolutely dominates in ray tracing performance, with the RTX 4090 nearly doubling the result of AMD's best card, the RX 7900 XTX, which sits in sixth place. Intel's Arc A770 lands at number 28.
  • DLSS 2 upscaling with quality mode is supported in most ray tracing games and can boost performance an additional 30~50 percent (depending on the game, resolution, and settings used). FSR 2 and XeSS support can provide a similar uplift, but FSR 2 is only in about a third as many games right now, and XeSS support is even less common.
  • You'll need an RTX 4070 or RTX 3080 or faster GPU to handle 1080p with maxed out settings at 60 fps or more, which means Performance mode upscaling (which renders at roughly 1080p) is what can make 4K viable.
  • RTX 4080 again ranks as the most efficient GPU, followed by the RTX 4090, RTX 4070, RTX 4070 Ti, RTX 4060 Ti, and RTX 4060. Even the RTX 3060, 3060 Ti, and 3070 rank ahead of AMD's best, which again is the RX 7900 XT. Intel's Arc GPUs are still pretty far down the efficiency list, though in DXR they're often better than AMD's RX 6000-series parts.
  • The best overall ray tracing "value" in FPS per dollar goes to the RTX 4060, followed by the Arc A750. Some of Nvidia's 3060/3070 cards are on clearance and also rank higher, though supplies will likely dry up soon. For DXR, AMD's best value is the RX 6700 10GB.
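The value metric in the last bullet is just the composite frame rate divided by street price. A minimal sketch (street prices fluctuate daily, so any ranking built on it is a snapshot):

```python
def fps_per_dollar(fps, price_usd):
    """Simple value metric: composite fps divided by current street price."""
    return fps / price_usd

# e.g., a card averaging 58.8 fps in the DXR suite at a $299 street price
value = fps_per_dollar(58.8, 299)  # roughly 0.2 fps per dollar
```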
Tom's Hardware Ray Tracing GPU Benchmarks Hierarchy
Graphics Card | 1080p Medium | 1080p Ultra | 1440p Ultra | 4K Ultra | Specifications (Links to Review)
GeForce RTX 4090 | 100.0% (159.9fps) | 100.0% (132.7fps) | 100.0% (97.8fps) | 100.0% (53.5fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 4080 | 83.1% (132.9fps) | 78.9% (104.8fps) | 72.9% (71.3fps) | 68.6% (36.7fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W
GeForce RTX 3090 Ti | 71.6% (114.5fps) | 65.3% (86.7fps) | 61.1% (59.7fps) | 57.8% (30.9fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 4070 Ti | 71.6% (114.4fps) | 64.8% (86.0fps) | 58.0% (56.8fps) | 52.9% (28.3fps) | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W
GeForce RTX 3090 | 67.3% (107.6fps) | 59.3% (78.7fps) | 54.8% (53.6fps) | 50.9% (27.2fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
Radeon RX 7900 XTX | 67.2% (107.5fps) | 60.0% (79.7fps) | 54.0% (52.8fps) | 48.7% (26.1fps) | Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W
GeForce RTX 3080 Ti | 65.7% (105.0fps) | 57.7% (76.6fps) | 53.3% (52.2fps) | 49.5% (26.5fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
GeForce RTX 3080 12GB | 64.5% (103.1fps) | 56.5% (75.0fps) | 51.8% (50.7fps) | 47.5% (25.4fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
Radeon RX 7900 XT | 60.9% (97.4fps) | 53.2% (70.5fps) | 47.0% (46.0fps) | 41.5% (22.2fps) | Navi 31, 5736 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W
GeForce RTX 4070 | 60.7% (97.2fps) | 52.3% (69.4fps) | 46.3% (45.2fps) | 41.2% (22.0fps) | AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W
GeForce RTX 3080 | 59.3% (94.8fps) | 51.7% (68.7fps) | 47.3% (46.3fps) | 42.6% (22.8fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
GeForce RTX 3070 Ti | 50.1% (80.2fps) | 42.1% (55.8fps) | 37.0% (36.1fps) | | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Radeon RX 6950 XT | 50.1% (80.1fps) | 42.5% (56.4fps) | 36.5% (35.7fps) | 32.3% (17.3fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
Radeon RX 7800 XT | 47.4% (75.8fps) | 41.3% (54.9fps) | 35.9% (35.1fps) | 31.9% (17.0fps) | Navi 32, 3840 shaders, 2430MHz, 16GB GDDR6@19.5Gbps, 624GB/s, 263W
Radeon RX 6900 XT | 47.1% (75.4fps) | 39.4% (52.3fps) | 34.1% (33.3fps) | 30.0% (16.1fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 4060 Ti | 46.9% (75.1fps) | 39.8% (52.8fps) | 34.3% (33.5fps) | 25.9% (13.9fps) | AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W
GeForce RTX 3070 | 46.8% (74.9fps) | 39.3% (52.2fps) | 34.2% (33.5fps) | | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
GeForce RTX 4060 Ti 16GB | 46.8% (74.9fps) | 39.9% (53.0fps) | 34.7% (34.0fps) | 30.8% (16.5fps) | AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W
Titan RTX | 46.5% (74.4fps) | 40.1% (53.3fps) | 35.8% (35.0fps) | 32.5% (17.4fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
GeForce RTX 2080 Ti | 44.3% (70.9fps) | 38.2% (50.7fps) | 33.6% (32.9fps) | | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
Radeon RX 6800 XT | 43.7% (70.0fps) | 36.5% (48.5fps) | 31.8% (31.1fps) | 28.0% (15.0fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
Radeon RX 7700 XT | 41.9% (67.0fps) | 36.6% (48.6fps) | 31.8% (31.1fps) | 27.6% (14.8fps) | Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245W
GeForce RTX 3060 Ti | 41.7% (66.7fps) | 34.8% (46.2fps) | 30.1% (29.5fps) | | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
Radeon RX 6800 | 37.6% (60.1fps) | 31.0% (41.2fps) | 26.9% (26.3fps) | | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
GeForce RTX 2080 Super | 37.2% (59.4fps) | 31.7% (42.0fps) | 27.7% (27.1fps) | | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
GeForce RTX 4060 | 36.8% (58.8fps) | 31.4% (41.7fps) | 26.4% (25.8fps) | | AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W
GeForce RTX 2080 | 35.7% (57.1fps) | 29.9% (39.7fps) | 26.1% (25.5fps) | | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A770 8GB | 33.9% (54.2fps) | 29.1% (38.7fps) | 25.5% (24.9fps) | | ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Intel Arc A770 16GB | 33.8% (54.1fps) | 29.1% (38.6fps) | 26.8% (26.2fps) | | ACM-G10, 4096 shaders, 2400MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W
GeForce RTX 2070 Super | 32.8% (52.4fps) | 27.5% (36.6fps) | 23.6% (23.1fps) | | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A750 | 31.9% (51.0fps) | 27.5% (36.6fps) | 24.1% (23.5fps) | | ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
Radeon RX 6750 XT | 31.1% (49.8fps) | 26.0% (34.5fps) | 22.0% (21.5fps) | | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
GeForce RTX 3060 | 30.9% (49.4fps) | 25.7% (34.1fps) | 22.0% (21.5fps) | | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
Radeon RX 6700 XT | 29.1% (46.6fps) | 24.3% (32.3fps) | 20.3% (19.9fps) | | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
GeForce RTX 2070 | 29.0% (46.3fps) | 24.2% (32.1fps) | 20.9% (20.4fps) | | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Intel Arc A580 | 28.5% (45.6fps) | 24.6% (32.7fps) | 21.6% (21.1fps) | | ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185W
GeForce RTX 2060 Super | 27.8% (44.5fps) | 23.0% (30.5fps) | 19.7% (19.3fps) | | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 6700 10GB | 26.8% (42.9fps) | 22.0% (29.2fps) | 17.9% (17.5fps) | | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W
GeForce RTX 2060 | 24.0% (38.4fps) | 19.1% (25.4fps) | | | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX 7600 | 23.9% (38.3fps) | 19.4% (25.7fps) | 15.6% (15.2fps) | | Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165W
Radeon RX 6650 XT | 23.5% (37.6fps) | 19.3% (25.6fps) | | | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W
Radeon RX 6600 XT | 22.9% (36.7fps) | 18.7% (24.8fps) | | | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
GeForce RTX 3050 | 22.0% (35.1fps) | 18.2% (24.1fps) | | | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 6600 | 19.2% (30.8fps) | 15.6% (20.7fps) | | | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
Intel Arc A380 | 11.4% (18.3fps) | | | | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 6500 XT | 6.2% (9.9fps) | | | | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon RX 6400 | 5.2% (8.3fps) | | | | Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W

If you felt the RTX 4090 performance was impressive at 4K in our standard test suite, just take a look at the results with ray tracing. Nvidia put even more ray tracing enhancements into the Ada Lovelace architecture, and those start to show up here. There are still further potential performance improvements for ray tracing with SER, OMM, and DMM — not to mention DLSS 3, though that ends up being a bit of a mixed bag, since the generated frames don't include new user input and add latency.

If you want a real kick in the pants, we ran many of the faster ray tracing GPUs through Cyberpunk 2077's RT Overdrive mode, which implements full "path tracing" (full ray tracing, without any rasterization). That provides a glimpse of how future games could behave, and why upscaling and AI techniques like Frame Generation are here to stay.

Even at 1080p medium, a relatively tame setting for DXR (DirectX Raytracing), the RTX 4090 roars past all contenders and leads the previous generation RTX 3090 Ti by 41%. At 1080p ultra, the lead grows to 53%, and it's nearly 64% at 1440p. Nvidia made claims before the RTX 4090 launch that it was "2x to 4x faster than the RTX 3090 Ti" — factoring in DLSS 3's Frame Generation technology — but even without DLSS 3, the 4090 is 72% faster than the 3090 Ti at 4K.

AMD continues to relegate DXR and ray tracing to secondary status, focusing more on improving rasterization performance, and on reducing manufacturing costs through the use of chiplets in the new RDNA 3 GPUs. As such, the ray tracing performance from AMD isn't particularly impressive. The new RX 7900 XTX basically matches Nvidia's previous generation RTX 3080 Ti, which puts it just a bit behind the RTX 3090, and Nvidia's 4070 Ti outpaces it by 7–9 percent on average across our test suite. The step-down RX 7900 XT, meanwhile, lands around the level of the RTX 4070. There are some minor improvements for RT performance in RDNA 3, though: the 7800 XT, for example, ends up basically tied with the RX 6800 XT in rasterization performance but is 10% faster in DXR.

Intel's Arc A7-series parts show a decent blend of performance in general, with the A750 coming in ahead of the RTX 3060 overall. With the latest drivers (and with vsync forced off in the options.txt file), Minecraft performance also looks much more in line with the other Arc DXR results.

Nvidia GeForce RTX 4090 Founders Edition

(Image credit: Tom's Hardware)

You can also see what DLSS Quality mode did for performance in DXR games on the RTX 4090 in our review, but the short summary is that it boosted performance by 78% at 4K ultra. DLSS 3 meanwhile improved framerates another 30% to 100% in our preview testing, though we recommend exercising caution when looking at FPS with Frame Generation enabled. It can dramatically boost frame rates in benchmarks, but when actually playing games it often doesn't feel much faster than without the feature. Overall, with DLSS 2, the 4090 in our ray tracing test suite is nearly four times as fast as AMD's RX 7900 XTX. Ouch.

AMD's FSR 2.0 could prove beneficial here if AMD can drive widespread adoption, but it still trails DLSS in game support. Right now, only one of the games in our DXR suite (Cyberpunk 2077) has FSR 2 support, while three more from our rasterization suite support FSR 2. By comparison, all of the DXR games we're testing support DLSS 2, plus another five from our rasterization suite, and three of the games even support DLSS 3.

Without FSR2, AMD's fastest GPUs can only clear 60 fps at 1080p ultra, while remaining decently playable at 1440p with 40–50 fps on average. But native 4K DXR remains out of reach for just about every GPU, with only the 3090 Ti, 4080, and 4090 breaking the 30 fps mark on the composite score — and a couple of games still come up short on the 4080 and 3090 Ti.

AMD also has FSR3 coming soon, providing for frame generation. Like DLSS3, it will add latency, and AMD requires the integration of Anti-Lag+ support in games that use FSR3. But Anti-Lag+ only works with AMD GPUs, which means non-AMD cards will likely incur a rather large latency penalty.

The midrange GPUs like the RTX 3070 and RX 6700 XT basically manage 1080p ultra and not much more, while the bottom tier of DXR-capable GPUs barely manage 1080p medium — and the RX 6500 XT can't even do that, with single digit framerates in most of our test suite, and one game that wouldn't even work at our chosen "medium" settings. (Control requires at least 6GB VRAM to let you enable ray tracing.)

Intel's Arc A380 ends up just ahead of the RX 6500 XT in ray tracing performance, which is interesting considering it only has 8 RTUs going up against AMD's 16 Ray Accelerators. Intel posted a deep dive into its ray tracing hardware, and Arc sounds reasonably impressive, except for the fact that the number of RTUs in the A380 severely limits performance. The top-end A770 still only has 32 RTUs, which proves sufficient for it to pull ahead (barely) of the RTX 3060 in DXR testing, but it can't go much further than that. Arc A770 also ends up ahead of AMD's RX 6800 in DXR performance, showing just how poor AMD's RDNA 2 hardware is when it comes to ray tracing.

It's also interesting to look at the generational performance of Nvidia's RTX cards. The slowest 20-series GPU, the RTX 2060, still outperforms the newer RTX 3050 by a bit, but the fastest RTX 2080 Ti comes in a bit behind the RTX 3070. Where the 2080 Ti basically doubled the performance of the 2060, the 3090 delivers about triple the performance of the 3050. Hopefully a future RTX 4050 will deliver gains similar to the 4090's, at a far more affordable price point.

2023 GPU Testbed

(Image credit: Tom's Hardware)

Test System and How We Test for GPU Benchmarks

We've used two different PCs for our testing. The latest 2022/2023 configuration uses an Alder Lake CPU and platform (with Raptor Lake results coming soon), while our previous testbed uses Coffee Lake and Z390. Here are the details of the two PCs.

Tom's Hardware 2022–2023 GPU Testbed

Intel Core i9-12900K
MSI Pro Z690-A WiFi DDR4
Corsair 2x16GB DDR4-3600 CL16
Crucial P5 Plus 2TB
Cooler Master MWE 1250 V2 Gold
Cooler Master PL360 Flux
Cooler Master HAF500
Windows 11 Pro 64-bit

Tom's Hardware 2020–2021 GPU Testbed

Intel Core i9-9900K
Corsair H150i Pro RGB
MSI MEG Z390 Ace
Corsair 2x16GB DDR4-3200
XPG SX8200 Pro 2TB
Windows 10 Pro (21H1)

For each graphics card, we follow the same testing procedure. We run one pass of each benchmark to "warm up" the GPU after launching the game, then run at least two passes at each setting/resolution combination. If the two runs are nearly identical (within 0.5% of each other), we use the faster of the two. If the difference is larger, we run the test at least twice more to determine what "normal" performance looks like.
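The run-selection logic above can be sketched roughly as follows. This is a simplified illustration of the procedure, not our actual benchmarking harness; `run_pass` stands in for whatever launches a benchmark pass and returns its average FPS:

```python
def benchmark_setting(run_pass, extra_runs=2, tolerance=0.005):
    """Pick a representative FPS result for one setting/resolution combo.

    run_pass: callable that executes one benchmark pass and returns avg FPS.
    Mirrors the procedure described above: a discarded warm-up pass, two
    timed passes, and extra passes if the first two disagree by more
    than ~0.5%.
    """
    run_pass()  # warm-up pass; result discarded
    results = [run_pass(), run_pass()]

    # If the two passes agree within tolerance, keep the faster one.
    if abs(results[0] - results[1]) / min(results) <= tolerance:
        return max(results)

    # Otherwise, run additional passes and take the median as "normal".
    for _ in range(extra_runs):
        results.append(run_pass())
    results.sort()
    return results[len(results) // 2]
```

For example, if the warm-up returns 100 fps and the two timed passes return 120.0 and 120.4 fps (a 0.33% gap), the sketch keeps 120.4; if they came back 120 and 130 instead, it would run more passes and report the median.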

We also look at all the data and check for anomalies. For example, the RTX 3070 Ti, RTX 3070, and RTX 3060 Ti should all generally perform within a narrow range — the 3070 Ti is about 5% faster than the 3070, which is about 5% faster than the 3060 Ti. If we see games with clear outliers (i.e., the gap between those cards exceeds 10%), we'll go back, retest whichever cards show the anomaly, and figure out what the "correct" result should be.
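That sanity check can be expressed as a simple heuristic. The helper below is a hypothetical sketch (the function name, expected 5% step, and 10% slack threshold are taken from the description above, not from any real tooling):

```python
def flag_outliers(fps_by_game, expected_ratio=1.05, slack=0.10):
    """Flag games where two adjacent-tier cards deviate from their
    expected ~5% performance step by more than `slack`.

    fps_by_game: {game: (fps_fast_card, fps_slow_card)} for two cards
    that normally sit about 5% apart (e.g. RTX 3070 Ti vs RTX 3070).
    """
    flagged = []
    for game, (fast, slow) in fps_by_game.items():
        ratio = fast / slow
        if abs(ratio - expected_ratio) > slack:
            flagged.append(game)
    return flagged
```

So a game where the faster card leads by 5% passes, while one where it leads by 30% gets flagged for retesting.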

Due to the length of time required for testing each GPU, updated drivers and game patches inevitably will come out that can impact performance. We periodically retest a few sample cards to verify our results are still valid, and if not, we go through and retest the affected game(s) and GPU(s). We may also add games to our test suite over the coming year, if one comes out that is popular and conducive to testing — see our article on what makes a good game benchmark for the selection criteria.

GPU Benchmarks: Individual Game Charts

The above tables provide a summary of performance, but for those who want to see the individual game charts, for both the standard and ray tracing test suites, we have those as well. We're only including more recent GPUs in these charts, as otherwise things get very messy. These charts also use our new test PC, so performance differs slightly from the tables above — the newer results are more relevant, but haven't yet been run on many of the older GPUs shown in the tables.

These charts are up to date as of October 10, 2023.

GPU Benchmarks Hierarchy — 1080p Medium

GPU Benchmarks Hierarchy — 1080p Ultra