# gpu-benchmarks

Here are 5 public repositories matching this topic.
Topics: aws, benchmark, hpc, gpu, cuda, rdma, efa, rdma-benchmarks, large-language-models, llm, cpp20-coroutine, gpu-benchmarks
⚡ Compare AI models by Accuracy × Cost × Carbon — RTX 5090 benchmarks reveal that 4-bit quantization wastes energy on small models.
Topics: open-source, quantization, energy-efficiency, carbon-footprint, mlops, carbon-calculator, green-ai, sustainable-ai, climate-tech, llm-evaluation, deepseek, rtx-5090, ai-sustainability, gpu-benchmarks
Updated Apr 5, 2026 - TypeScript
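The "Accuracy × Cost × Carbon" comparison above can be sketched as a simple composite score. The model names, numbers, and weighting below are illustrative assumptions, not data from the repository; the only idea taken from the description is that accuracy should be rewarded while cost and carbon are penalized, which can make 4-bit quantization a net loss on small models.

```python
# Hypothetical sketch of an Accuracy x Cost x Carbon ranking.
# All figures below are made-up placeholders for illustration.
models = {
    # name: (accuracy [0-1], cost in USD per 1M tokens, carbon in gCO2 per 1M tokens)
    "small-fp16": (0.71, 0.20, 12.0),
    "small-4bit": (0.69, 0.18, 14.0),  # assumed: 4-bit costs slightly more energy here
    "large-4bit": (0.83, 0.90, 40.0),
}

def score(accuracy: float, cost: float, carbon: float) -> float:
    """Higher is better: reward accuracy, penalize cost and carbon."""
    return accuracy / (cost * carbon)

# Rank models by the composite score, best first.
ranked = sorted(models, key=lambda name: score(*models[name]), reverse=True)
for name in ranked:
    print(name, round(score(*models[name]), 3))
```

With these illustrative numbers the small fp16 model outranks its 4-bit variant, since the energy (carbon) penalty outweighs the small cost saving — the effect the repository description claims for small models.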
Benchmark results and performance data for the Intel Arc Pro B70 GPU (Xe2/Battlemage): LLM inference, video generation, and dual-GPU scaling.
Topics: sycl, xe2, intel-gpu, ai-inference, intel-arc, llama-cpp, llm-inference, battlemage, gpu-benchmarks, arc-pro-b70
Updated Apr 21, 2026
Optimal GPU, VRAM, and RAM configurations for running DeepSeek R1 locally (7B to 671B models).
Updated Feb 18, 2026
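A sizing guide like the one above typically starts from a back-of-envelope VRAM estimate: weight memory is parameter count times bytes per parameter, plus headroom for the KV cache and activations. The formula and the 1.2× overhead factor below are assumptions for illustration, not figures from the repository.

```python
# Hypothetical back-of-envelope VRAM estimator for running an LLM locally.
# The overhead factor (KV cache + activations) is an assumed placeholder.
def vram_gib(params_billion: float, bits_per_param: int, overhead: float = 1.2) -> float:
    """Rough VRAM needed in GiB: weight bytes scaled by an overhead factor."""
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 2**30

# Example: 4-bit quantization across the model sizes the guide covers.
for size in (7, 70, 671):
    print(f"{size}B @ 4-bit: ~{vram_gib(size, 4):.0f} GiB")
```

Under these assumptions a 7B model at 4-bit fits comfortably on a consumer GPU, while 671B at 4-bit needs hundreds of GiB and hence multi-GPU or CPU-offload setups — which is why such guides span configurations rather than giving one answer.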