Foundation Models

In August 2021, Stanford announced the establishment of the Center for Research on Foundation Models (CRFM) as part of the Stanford Institute for…


@blog.inten.to | 2 years ago

Hardware for Deep Learning. Part 4: ASIC

This is the part about ASICs in the “Hardware for Deep Learning” series. The table of contents for the series is here.


@blog.inten.to | 3 years ago

GPT-3: TL;DR and more

OpenAI just published the paper “Language Models are Few-Shot Learners”, presenting an upgrade of their well-known GPT-2 model — the…


@blog.inten.to | 3 years ago
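
An aside on the few-shot idea behind that paper: the task is specified entirely in the prompt via a handful of demonstrations, with no fine-tuning or gradient updates. A minimal sketch follows (the English-to-French demonstrations are taken from the GPT-3 paper itself; the `complete` function is a hypothetical stand-in, not a real API):

```python
# Few-shot prompting as described in "Language Models are Few-Shot Learners":
# the prompt holds a task description plus a few input/output demonstrations,
# and the model is asked to continue the pattern with no parameter updates.

FEW_SHOT_PROMPT = """Translate English to French:

sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>"""

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a large language model.

    A sufficiently large model is expected to continue this prompt with
    'fromage', purely from the in-context demonstrations above.
    """
    raise NotImplementedError("plug in the LLM API of your choice")

if __name__ == "__main__":
    print(FEW_SHOT_PROMPT)
```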

Hardware for Deep Learning. Part 2: CPU

This is the part on CPUs in the “Hardware for Deep Learning” series.


@blog.inten.to | 4 years ago

Speeding Up BERT

How to make BERT models faster


@blog.inten.to | 4 years ago
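
That post surveys ways to accelerate BERT. As one illustrative approach (a common technique, not claimed to be the post’s specific recipe), post-training dynamic quantization in PyTorch converts the model’s Linear layers to int8 for faster CPU inference; the checkpoint name below is just an example:

```python
# Sketch: speeding up BERT inference on CPU with post-training dynamic
# quantization. Weights of nn.Linear layers are stored as int8; activations
# are quantized on the fly at inference time.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # any BERT checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Quantize all Linear layers to int8; other modules stay in float32.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Quantization makes inference cheaper.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.shape)
```

On typical CPUs this trades a small accuracy drop for a noticeably smaller model and faster matrix multiplies; distillation and pruning are other options in the same spirit.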

RoBERTa: A Robustly Optimized BERT Pretraining Approach

Hardly a month goes by without a new language model announced to surpass the good old (oh my god, it’s still only 9 months old) BERT in one…


@blog.inten.to | 4 years ago

Web app to access multiple Machine Translation engines (all of them)

Human-Friendly Tools for Machine Intelligence


@blog.inten.to | 5 years ago

Hardware for Deep Learning. Part 3: GPU (upd. Sep 2018, Including Turing Arch)

This is the part on GPUs in the “Hardware for Deep Learning” series. It is the most content-heavy part, mostly because GPUs are the current…


@blog.inten.to | 5 years ago