Lambda presents an inference benchmark of the Stable Diffusion model across different GPUs and CPUs.
Stable Diffusion is great at many things, but not great at everything, and getting results in a particular style or appearance often involves a lot of "prompt engineering" work. If you have a particular type of image you'd like to generate, then an alternative to spending a long …
TL;DR: While waiting for NVIDIA's next-generation consumer and professional GPUs, we decided to write a blog post about the best GPUs for deep learning currently available as of March 2022. For readers who use pre-Ampere-generation GPUs and are considering an upgrade, these are what you …
Deep learning engineers and researchers spend too much time managing their infrastructure. Fortune 500 companies, startups, and universities are forced to build huge teams to administer the exponentially growing compute resources required to train modern deep learning models. Bil …
We’re excited to announce today that Lambda GPU Cloud is the first public cloud to offer instances with 2x and 4x RTX A6000 GPUs! We’ve been working hard behind the scenes to launch A6000 instances and provide a unique cloud GPU...
PyTorch & TensorFlow benchmarks of the NVIDIA A100 and Tesla V100 for convnets and language models. See both 32-bit and mixed-precision performance.
Instructions for getting TensorFlow and PyTorch running on NVIDIA's GeForce RTX 30 Series GPUs (Ampere), including the RTX 3090, RTX 3080, and RTX 3070.
1, 2, or 4 NVIDIA® Quadro RTX™ 6000 GPUs on Lambda Cloud are a cost-effective way of scaling your machine learning infrastructure. With the new RTX 6000 instances you can expect a lower initial price of $1.25 / hr, 2x the performance per dollar vs. a p3.8xlarge, and up-to-date dr …
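The performance-per-dollar claim above is just throughput divided by the hourly rate. A minimal sketch of the comparison; only the $1.25 / hr figure comes from the post, while the throughput numbers and the p3.8xlarge rate are illustrative assumptions:

```python
def perf_per_dollar(images_per_sec: float, hourly_rate_usd: float) -> float:
    """Images processed per dollar of instance time (images/sec * 3600 / $/hr)."""
    return images_per_sec * 3600 / hourly_rate_usd

# Hypothetical throughputs; $12.24/hr is an assumed p3.8xlarge on-demand rate.
rtx6000 = perf_per_dollar(400, 1.25)    # 1x RTX 6000 instance at $1.25/hr
p38x = perf_per_dollar(2000, 12.24)     # 4x V100 p3.8xlarge (assumed rate)

print(round(rtx6000 / p38x, 2))  # ratio of performance per dollar
```

With these made-up throughputs the ratio lands near 2x, which is the shape of the comparison the post is making.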
It’s important to take available space, power, cooling, and relative performance into account when deciding which RTX 3000 series cards to include in your next deep learning workstation.
A balanced perspective on OpenAI's GPT-3. We summarize how the AI research community is thinking about OpenAI's new language model.
This article gives a high-level summary of what's new in GPT-3, how to train it and run inference, and the implications of its results. Before we dive into details, these are the take-home messages...
Last week, NVIDIA announced the new A100 GPU and DGX A100 server. At Lambda, we've taken a look at the Ampere architecture, the A100 GPU, and the DGX A100 server to estimate their deep learning performance. We currently expect the A100 to have a ...
A TCO comparison between the Lambda Hyperplane 8x V100 server and the AWS p3dn.24xlarge instance. The Hyperplane cost comparison is very similar to that of the DGX-1.
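The core of any own-vs-rent TCO comparison is a break-even point: the number of GPU-hours at which amortized hardware plus operating costs drop below the cloud bill. A minimal sketch; every number below is a hypothetical placeholder, not a figure from the post:

```python
def breakeven_hours(purchase_price: float, cloud_rate: float, op_rate: float) -> float:
    """Hours of use at which owning costs less than renting.

    purchase_price: upfront server cost ($)
    cloud_rate: on-demand cloud price ($/hr)
    op_rate: power, cooling, and admin cost of the owned server ($/hr)
    """
    return purchase_price / (cloud_rate - op_rate)

# Assumed values: $150k 8x V100 server, $31.22/hr cloud instance, $2/hr to operate.
hours = breakeven_hours(150_000, 31.22, 2.00)
print(round(hours))
```

Dividing the break-even hours by expected utilization (hours/month) then gives the payback period in months, which is how TCO write-ups typically present the result.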
Today, we’re releasing a new 8x NVIDIA® V100 Tensor Core GPU instance type for Lambda Cloud users. Priced at $12.00 / hr, our new instance provides over 2x more compute per dollar than comparable on-demand 8-GPU instances from other cloud providers.
This blog post summarizes our GPU benchmark for training state-of-the-art (SOTA) deep learning models. We measure each GPU's performance by batch capacity as well as...
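Training benchmarks like the one above usually report throughput in samples per second: batch size times the number of steps, divided by wall-clock time. A minimal sketch with made-up numbers (the post does not specify these):

```python
def throughput(batch_size: int, num_steps: int, seconds: float) -> float:
    """Training throughput in samples per second over a timed run."""
    return batch_size * num_steps / seconds

# e.g. 64-image batches, 100 timed steps, 12.8 s of wall-clock time
print(throughput(64, 100, 12.8))  # → 500.0
```

Batch capacity, the other metric mentioned, is simply the largest batch size that fits in a GPU's memory for a given model; larger capacity usually lets a GPU reach a higher throughput plateau.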
The new Lambda Hyperplane-16 makes it easy to scale out your deep learning infrastructure. The Hyperplane-16 incorporates 16 NVIDIA Tesla V100 SXM3 GPUs with NVLink and the Lambda Stack, which includes all major AI frameworks, to take the hassle out of training even the largest m …
This tutorial explains the basics of TensorFlow 2.0 using image classification as the example: 1) data pipelines with the Dataset API; 2) training, evaluating, saving, and restoring models with Keras; 3) multi-GPU training with distribution strategies; 4) customized training with callbacks.
Object detection using the Single Shot MultiBox Detector (SSD). The task of object detection is to identify "what" objects are inside of an image and "where" they are. Given an input image, the algorithm outputs a list of objects, each associated with a class label and location (u …
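The "where" part of a detection is typically a bounding box, and SSD-style detectors score predicted boxes against ground truth using intersection-over-union (IoU). A minimal sketch of that metric; the box format and example values are illustrative, not taken from the post:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection pairs a class label with a box, e.g.:
detection = {"label": "dog", "box": (10, 10, 50, 50)}
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # partial overlap, ≈ 0.143
```

A predicted box is usually counted as a match when its IoU with a ground-truth box exceeds a threshold such as 0.5.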
Machine Learning Performance: Titan RTX vs. 2080 Ti vs. 1080 Ti vs. Titan Xp vs. Titan V vs. Tesla V100. At Lambda Labs, we're getting a lot of inquiries about the performance of our newly launched Lambda Dual, a 2x Titan RTX workstation. In this post, we benchmark the speed …
What's the best GPU for Deep Learning in 2018? We benchmark the 2080 Ti vs. the Titan V, V100, and 1080 Ti.
2080 Ti Deep Learning Benchmarks. What is the best Deep Learning GPU in 2018? 2080 Ti vs. 1080 Ti: which wins?
Deep Learning Workstations with NVIDIA 1080 Ti | Titan Xp | Titan V | Tesla V100 | Multi-GPU Laptops, Workstations, and Servers. TensorFlow, Keras, PyTorch, and Caffe2 preinstalled with Ubuntu 18.04 or Windows 10. In stock. Ships immediately.