The Meta Scalable Video Processor (MSVP) will support video on demand and live streaming, as well as generative AI and AR/VR content.
In 2020, we initiated the Meta Training and Inference Accelerator (MTIA) family of chips to support our evolving AI workloads, starting with an inference accelerator ASIC for deep learning recommendation models (DLRMs).
Meta AI has built CICERO, the first AI to achieve human-level performance in the popular strategy game Diplomacy. It's a breakthrough toward building AI that can use language to work with people to achieve strategic goals.
Sending an MP3 typically requires 128 kb/s of bandwidth. We can compress hi-fi audio down to 12 kb/s without sacrificing quality.
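The bitrate claim above works out to roughly an 11x reduction. A quick sanity check in Python (the 128 kb/s and 12 kb/s figures come from the text; the 3-minute clip length is an illustrative assumption):

```python
# Bitrate figures from the text: MP3 at 128 kb/s vs. a neural codec at 12 kb/s.
MP3_KBPS = 128
CODEC_KBPS = 12

# Compression ratio relative to MP3.
ratio = MP3_KBPS / CODEC_KBPS  # ~10.7x

def clip_size_mb(kbps: float, seconds: float) -> float:
    """Size of an audio clip at a constant bitrate (kilobits -> megabytes, decimal units)."""
    return kbps * seconds / 8 / 1000  # kb -> kB -> MB

# Illustrative 3-minute (180 s) clip.
mp3_mb = clip_size_mb(MP3_KBPS, 180)     # 2.88 MB
codec_mb = clip_size_mb(CODEC_KBPS, 180)  # 0.27 MB
print(f"ratio: {ratio:.1f}x, MP3: {mp3_mb:.2f} MB, codec: {codec_mb:.2f} MB")
```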
We’re open-sourcing AITemplate, a unified inference system for both AMD and NVIDIA GPUs. It delivers performance improvements of up to 12x on NVIDIA GPUs and 4x on AMD GPUs compared to eager-mode PyTorch.
Make-A-Video builds on Meta AI’s recent research in generative technology and has the potential to open new opportunities for creators and artists.
We are releasing Implicitron, an extension of PyTorch3D that enables fast prototyping of 3D reconstruction and new-view synthesis methods based on rendering of implicit representations such as radiance fields, signed distance fields, and more.
We’re announcing that Meta AI has built and released BlenderBot 3, the first publicly available 175B-parameter chatbot, complete with model weights, code, datasets, and model cards. We’ve deployed it in a live interactive conversational AI demo.
We’re showcasing an exploratory AI research concept called Make-A-Scene. This multimodal generative AI method puts creative control in the hands of people who use it by allowing them to describe and illustrate their vision through both text descriptions and freeform sketches.
Introducing AI-driven acoustic synthesis for AR and VR
Our work aims to break down language barriers across the world, so that everyone can understand and communicate with anyone, no matter what language they speak.
Meta AI is sharing new research on speech-to-speech translation (S2ST) that does not rely on text generation as an intermediate step. Our method outperforms previous approaches and is the first of its kind trained on real S2ST data for multiple language pairs.
We're sharing details on a new structure for Meta AI that will help us not only better pursue open, groundbreaking research, but also improve how we leverage AI in our products.
Meta AI is sharing OPT-175B, the first 175-billion-parameter language model to be made available to the broader AI research community.
Meta AI has developed a new, more flexible approach to teaching AI to cooperate and make their actions understandable to people: off-belief learning. Instead of using human-labeled data, off-belief learning starts with the search for a “grounded communication,” where the …
Explore Meta AI’s self-supervised learning demo for images
Meta AI Research is sharing details on Project CAIRaoke, a neural model for conversational AI. It can power the next generation of intelligent assistants, capable of more complex and more useful interactions with people.
Meta's Chief AI Scientist Yann LeCun sketches how the ability to learn “world models” – internal models of how the world works – may be the key to building human-level AI.