[Paper Summary] Facebook AI Introduces few-shot NAS (Neural Architecture Search)

Neural Architecture Search (NAS) has recently become an active area of deep learning research, offering promising results. One basic approach, vanilla NAS, uses search techniques to explore the search space and evaluates every new architecture by training it from scratch. This can require thousands of GPU hours, making the computing cost prohibitive for many research applications.
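
For context, the vanilla NAS loop described above can be summarized in a few lines of Python. This is an illustrative sketch only: the search space, the random search strategy, and the placeholder evaluation function are hypothetical stand-ins, and the `train_and_evaluate` step is exactly where the thousands of GPU hours go.

```python
import random

# Hypothetical search space: each key is a design choice, each list its options.
SEARCH_SPACE = {
    "depth": [8, 14, 20],
    "width": [32, 64, 128],
    "kernel_size": [3, 5, 7],
}

def sample_architecture():
    """Randomly pick one value per design choice."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    """Placeholder for training the candidate from scratch and returning its
    validation accuracy -- in practice this single call costs GPU-hours."""
    return random.random()  # stand-in for a real training run

def vanilla_nas(num_candidates=100):
    """Sample candidates, train each from scratch, keep the best one."""
    best_arch, best_score = None, float("-inf")
    for _ in range(num_candidates):
        arch = sample_architecture()
        score = train_and_evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    print(vanilla_nas(num_candidates=10))
```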

Read More

[Paper Summary] DeepMind introduces its Supermodel AI ‘Perceiver’: a Neural Network Model that can process all types of input

DeepMind recently introduced a state-of-the-art deep learning model called Perceiver in a new paper. It adapts the Transformer so that it can consume all types of input, ranging from audio to images, and perform different tasks, such as image recognition, for which specialized neural networks are generally developed. In this respect it works much like the human brain, which perceives multi-modal input.
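
To make the idea concrete, here is a minimal PyTorch sketch (not DeepMind's implementation) of the core mechanism: a small, fixed-size latent array cross-attends to a flattened array of input elements, so the same model can ingest audio samples, pixels, or other modalities. All dimensions, layer counts, and the classifier head below are arbitrary choices made for illustration.

```python
import torch
import torch.nn as nn

class TinyPerceiver(nn.Module):
    """Loose, simplified sketch of the Perceiver idea: a fixed-size latent array
    cross-attends to a flat array of input elements, then latent-only
    self-attention does the heavy lifting, keeping cost independent of modality."""

    def __init__(self, input_dim=64, latent_dim=128, num_latents=32, depth=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, latent_dim))
        self.input_proj = nn.Linear(input_dim, latent_dim)
        self.cross_attn = nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)
        self.self_attn = nn.ModuleList(
            [nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True) for _ in range(depth)]
        )
        self.classifier = nn.Linear(latent_dim, 10)  # e.g. 10 output classes

    def forward(self, inputs):
        # inputs: (batch, num_elements, input_dim) -- any modality flattened to a sequence
        batch = inputs.shape[0]
        x = self.input_proj(inputs)
        z = self.latents.unsqueeze(0).expand(batch, -1, -1)
        z, _ = self.cross_attn(z, x, x)        # latents attend to the raw inputs
        for attn in self.self_attn:
            z = z + attn(z, z, z)[0]           # latent-only self-attention with residual
        return self.classifier(z.mean(dim=1))  # pool latents -> class logits

# Toy usage: 2 examples, each a sequence of 500 input elements of any kind.
logits = TinyPerceiver()(torch.randn(2, 500, 64))
```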

Read More

[Paper Summary] Skoltech Researchers present a Machine Learning Framework involving Convolutional Neural Networks

Skoltech researchers and their partners in the U.S. have created a neural network that can help tweak semiconductor crystals to achieve superior properties for electronics. This is an exciting new direction of development with limitless possibilities for next-generation chips and solar cells. This study is published as a paper in the journal npj Computational Materials.

Read More

[Paper Summary] Researchers at Facebook AI, UC Berkeley, and Carnegie Mellon University Announced Rapid Motor Adaptation (RMA), An Artificial Intelligence (AI) Technique

To achieve success in the real world, walking robots must adapt to whatever surfaces they encounter, objects they carry, and conditions they are in, even if they’ve not been exposed to those conditions before. Moreover, to avoid falling and suffering damage, these adjustments must happen in fractions of a second.

Read More

[Paper Summary] Facebook AI Releases ‘BlenderBot 2.0’: An Open Source Chatbot that searches the internet to engage in Intelligent Conversations

The GPT-3 and BlenderBot 1.0 models are extremely forgetful, but that’s not the worst of it! They’re also known to “hallucinate” knowledge when asked a question they can’t answer.

Read More

[Paper Summary] Researchers from Facebook AI Research and UIUC Propose ‘MaskFormer’, A Mask Classification Model

In recent years, semantic segmentation has become an important tool in computer vision. The dominant technique, per-pixel classification, partitions an image by assigning a category label to every pixel, typically with deep learning models such as Fully Convolutional Networks (FCNs). Mask classification is an alternative that separates the partitioning and classifying aspects of segmentation: instead of labeling individual pixels, mask-based methods predict a set of binary masks, each associated with a single class.
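
As a rough illustration of how mask classification can still produce an ordinary per-pixel result, the PyTorch sketch below (simplified relative to MaskFormer itself) blends N predicted binary masks with their class probabilities and takes a per-pixel argmax. The tensor shapes and the extra ‘no object’ class are assumptions made for the example.

```python
import torch

def masks_to_semantic_map(class_logits, mask_logits):
    """Sketch of mask-classification inference: each of N predicted masks carries
    a class distribution; blending the (soft) binary masks with their class
    probabilities and taking a per-pixel argmax yields a semantic segmentation map.
    Assumed shapes: class_logits (N, K+1) with a 'no object' class last,
    mask_logits (N, H, W)."""
    class_probs = class_logits.softmax(dim=-1)[:, :-1]  # (N, K), drop 'no object'
    mask_probs = mask_logits.sigmoid()                  # (N, H, W)
    # Per-pixel score for every class, summed over the N mask predictions.
    semseg = torch.einsum("nk,nhw->khw", class_probs, mask_probs)
    return semseg.argmax(dim=0)                         # (H, W) class indices

# Toy usage: 100 masks, 20 classes (+1 'no object'), 64x64 image, random predictions.
labels = masks_to_semantic_map(torch.randn(100, 21), torch.randn(100, 64, 64))
```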

Read More

[Paper Summary] Stanford’s AI Researchers introduce QA-GNN Model that jointly reasons with Language Models and Knowledge Graphs

In this research paper, published at NAACL 2021, the researchers found that combining LMs and KGs makes it possible to answer questions more effectively. Existing systems that use LMs and KGs tend to pick up noisy knowledge, and the interactions between the QA context and the KG are not modeled.
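
The sketch below is a schematic PyTorch illustration of that joint-reasoning idea, not the paper's actual architecture: the LM-encoded QA context is added to the retrieved KG subgraph as an extra node, KG nodes are weighted by their relevance to the context, and a simple message-passing network reasons over the joint graph. The layer sizes, relevance scoring, and answer-scoring head are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointGraphReasoner(nn.Module):
    """Schematic sketch of joint LM+KG reasoning: the QA context becomes an
    extra node in the KG subgraph, KG nodes are scaled by relevance to the
    context, and message passing runs over the joint graph."""

    def __init__(self, dim=128, gnn_layers=3):
        super().__init__()
        self.msg = nn.ModuleList([nn.Linear(dim, dim) for _ in range(gnn_layers)])
        self.score = nn.Linear(dim, 1)

    def forward(self, context_vec, kg_node_vecs, adj):
        # context_vec: (dim,) from a pretrained LM; kg_node_vecs: (N, dim); adj: (N+1, N+1)
        relevance = torch.sigmoid(kg_node_vecs @ context_vec)  # (N,) relevance to the question
        nodes = torch.cat([context_vec.unsqueeze(0),
                           kg_node_vecs * relevance.unsqueeze(-1)], dim=0)
        for layer in self.msg:
            nodes = torch.relu(adj @ layer(nodes)) + nodes     # message passing + residual
        return self.score(nodes[0])                            # score read off the context node

# Toy usage: one QA-context node plus 5 retrieved KG nodes with a random adjacency.
out = JointGraphReasoner()(torch.randn(128), torch.randn(5, 128), torch.rand(6, 6))
```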

Read More

[Paper Summary] A new study from Cambridge, Twitter, and UCLA proposes CW Networks (CWNs) with better Expressive Power than GNNs

A recent study from a multi-institutional research team introduces CW Networks (CWNs), a message-passing mechanism that produces state-of-the-art outcomes across a variety of molecular datasets while delivering greater expressivity than commonly used graph neural networks (GNNs).
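
As a very loose sketch of the kind of message passing involved, the toy PyTorch code below lets each cell of a (pre-built) cell complex aggregate messages from its boundary cells and its upper-adjacent cells. The lifting of a graph into a cell complex and the exact CWN update equations are omitted; the data structures and layers here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ToyCellMessagePassing(nn.Module):
    """Toy message passing over a cell complex: a graph is first lifted so that
    edges and rings become cells of their own, and each cell then aggregates
    messages from its boundary cells and from its upper-adjacent cells."""

    def __init__(self, dim=64):
        super().__init__()
        self.boundary_msg = nn.Linear(dim, dim)
        self.upper_msg = nn.Linear(dim, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, cell_feats, boundary, upper):
        # cell_feats: dict cell_id -> (dim,) tensor
        # boundary:   dict cell_id -> lower-dimensional cells on its boundary
        # upper:      dict cell_id -> cells sharing a common higher-dimensional cell
        new_feats = {}
        for cid, h in cell_feats.items():
            b_nbrs, u_nbrs = boundary.get(cid, []), upper.get(cid, [])
            b = sum(self.boundary_msg(cell_feats[n]) for n in b_nbrs) if b_nbrs else torch.zeros_like(h)
            u = sum(self.upper_msg(cell_feats[n]) for n in u_nbrs) if u_nbrs else torch.zeros_like(h)
            new_feats[cid] = torch.relu(self.update(torch.cat([b, u]))) + h
        return new_feats

# Toy usage: a triangle whose three edges bound one 2-cell ("ring").
feats = {c: torch.randn(64) for c in ["v1", "v2", "v3", "e12", "e23", "e13", "ring"]}
boundary = {"e12": ["v1", "v2"], "e23": ["v2", "v3"], "e13": ["v1", "v3"],
            "ring": ["e12", "e23", "e13"]}
upper = {"e12": ["e23", "e13"], "e23": ["e12", "e13"], "e13": ["e12", "e23"]}
out = ToyCellMessagePassing()(feats, boundary, upper)
```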

Read More

[Paper Summary] Researchers from the University of Sydney and Japan’s NIMS have discovered a way to create Artificial Networks of Nanowires

A team of researchers from the University of Sydney and Japan’s National Institute for Materials Science (NIMS) has demonstrated that a random network of nanowires can mimic both the structure and the dynamics of the brain and solve simple information-processing tasks.

Read More

IBM Open Sources ‘CodeFlare’, a Machine Learning Framework that simplifies AI Workflows on the Hybrid Cloud

IBM has open-sourced CodeFlare, a machine learning framework that lets developers train their models more efficiently on the hybrid cloud. The new framework is aimed at anyone looking to simplify their workflow and shorten the time it takes to run. The motivating example behind the design: users running 10,000 work pipelines currently wait up to 4 hours before receiving a result, while with the new framework the same workload takes only about 15 minutes.
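
CodeFlare is built on top of the Ray distributed framework; the snippet below is not CodeFlare's own API, just a hedged illustration of the underlying idea that fanning many independent pipeline runs out across a cluster collapses wall-clock time. The `run_pipeline` function and its configs are hypothetical placeholders.

```python
import ray

ray.init()  # connects to a local or remote Ray cluster

@ray.remote
def run_pipeline(config):
    """Placeholder for one ML pipeline run (preprocess -> train -> evaluate).
    Returns a toy 'score' so the example stays self-contained."""
    return {"config": config, "score": sum(config.values()) % 7}

# Fan out many independent pipeline runs across whatever workers the cluster has;
# wall-clock time is bounded by the slowest run, not by the number of pipelines.
configs = [{"lr_x1000": lr, "depth": d} for lr in (1, 3, 10) for d in (2, 4, 8)]
results = ray.get([run_pipeline.remote(c) for c in configs])
print(max(results, key=lambda r: r["score"]))
```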

Read More

[Paper Summary] Stanford AI Lab introduces AGQA: A new benchmark for Compositional, Spatio-Temporal Reasoning

Designing machines capable of exhibiting a compositional understanding of visual events has been an important goal of the computer vision community. Stanford AI has recently introduced the ‘Action Genome Question Answering’ (AGQA) benchmark. It measures temporal, spatial, and compositional reasoning via nearly two hundred million question-answer pairs. The questions are complex, compositional, and annotated to allow definitive tests of which types of questions models can and cannot answer.

Read More

EBRAINS Researchers introduce a Robot whose internal workings Mimic a Human Brain (with Video)

The human brain contains roughly 86 billion neurons that process information from the senses and body and send messages back to the body. Human intelligence is therefore one of the most intriguing capabilities AI scientists are looking to replicate. A team of researchers at the new EBRAINS research infrastructure is building robots whose internal workings mimic the brain, an effort that could yield new insights into the underlying neural mechanisms.

Read More

Google AI introduces MIAP (More Inclusive Annotations for People) Dataset in the Open Images Extended Collection for Computer Vision Research

Obtaining datasets that include thorough labeling of sensitive attributes is difficult, especially in the domain of computer vision. Recently, Google has introduced the More Inclusive Annotations for People (MIAP) dataset in their Open Images Extended collection.

Read More