[Paper Summary] Facebook AI Introduces few-shot NAS (Neural Architecture Search)

Neural Architecture Search (NAS) has recently become an interesting area of deep learning research, offering promising results. One such approach, Vanilla NAS, uses search techniques to explore the search space and evaluate new architectures by training them from scratch. However, this may require thousands of GPU hours, leading to a very high computing cost for many research applications.
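To make the cost issue concrete, here is a minimal sketch of a vanilla NAS loop: a search routine samples candidate architectures and evaluates each one by training it from scratch, which is the expensive step. The search space, sampling strategy, and scoring below are illustrative placeholders, not the approach from the paper.

```python
import random

# Illustrative search space: each architecture is a choice of depth,
# width, and operation type (placeholders, not the paper's space).
SEARCH_SPACE = {
    "depth": [8, 14, 20],
    "width": [32, 64, 128],
    "op": ["conv3x3", "conv5x5", "depthwise"],
}

def sample_architecture():
    """Draw one architecture uniformly at random from the space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_from_scratch(arch):
    """Stand-in for full training; in vanilla NAS this is the expensive
    step that can cost GPU-hours per candidate."""
    # A real implementation would build a network from `arch`, train it
    # on the target dataset, and return validation accuracy.
    return random.random()  # placeholder score

def vanilla_nas(num_candidates=10):
    """Evaluate every sampled architecture independently from scratch."""
    best_arch, best_score = None, float("-inf")
    for _ in range(num_candidates):
        arch = sample_architecture()
        score = train_from_scratch(arch)  # dominant cost of vanilla NAS
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    print(vanilla_nas())
```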

Read More

[Paper Summary] Researchers at Facebook AI, UC Berkeley, and Carnegie Mellon University Announced Rapid Motor Adaptation (RMA), An Artificial Intelligence (AI) Technique

To achieve success in the real world, walking robots must adapt to whatever surfaces they encounter, objects they carry, and conditions they are in, even if they’ve not been exposed to those conditions before. Moreover, to avoid falling and suffering damage, these adjustments must happen in fractions of a second.

Read More

[Paper Summary] Facebook AI Releases ‘BlenderBot 2.0’: An Open Source Chatbot that searches the internet to engage in Intelligent Conversations

The GPT-3 and BlenderBot 1.0 models are extremely forgetful, but that’s not the worst of it! They’re also known to “hallucinate” knowledge when asked a question they can’t answer.

Read More

[Paper Summary] Researchers from Facebook AI Research and UIUC Propose ‘MaskFormer’, A Mask Classification Model

In recent years, semantic segmentation has become an important tool in computer vision. One common formulation is per-pixel classification, where the goal is to partition an image into regions of different categories using deep learning techniques such as Fully Convolutional Networks (FCNs). Mask classification is an alternative that separates the partitioning and classifying aspects of segmentation: instead of labeling each pixel individually, mask-based methods predict a set of binary masks, each associated with a single class.
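A minimal sketch of the difference in output structure is shown below: per-pixel classification predicts one class score per pixel, while mask classification predicts a set of binary masks, each paired with a single class prediction. The shapes, the number of mask queries, and the way the two outputs are combined are illustrative assumptions, not MaskFormer's implementation details.

```python
import torch

batch, classes, height, width = 2, 21, 64, 64
num_masks = 100  # number of predicted mask queries (illustrative)

# Per-pixel classification (FCN-style): one class score per pixel.
per_pixel_logits = torch.randn(batch, classes, height, width)
per_pixel_labels = per_pixel_logits.argmax(dim=1)            # (batch, H, W)

# Mask classification: a set of binary masks, each paired with one
# class prediction for the whole mask.
mask_logits = torch.randn(batch, num_masks, height, width)   # binary masks
mask_class_logits = torch.randn(batch, num_masks, classes)   # one label per mask

# To recover a per-pixel segmentation, combine the two outputs: for each
# pixel, weight every mask's class distribution by how strongly that mask
# claims the pixel.
mask_probs = mask_logits.sigmoid()                 # (batch, num_masks, H, W)
class_probs = mask_class_logits.softmax(dim=-1)    # (batch, num_masks, classes)
segmentation = torch.einsum("bqhw,bqc->bchw", mask_probs, class_probs).argmax(dim=1)

print(per_pixel_labels.shape, segmentation.shape)  # both (batch, H, W)
```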

Read More

[Paper Summary] Stanford AI Researchers introduce QA-GNN, a Model that jointly reasons with Language Models and Knowledge Graphs

In this research paper, published at NAACL 2021, researchers found that combining LMs and KGs makes it possible to answer questions more effectively. Existing systems that use LMs and KGs tend to retrieve noisy knowledge, and the interactions between the QA context and the KG are not modeled.
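One way to picture joint reasoning over the QA context and the KG is to add the question-answer context as an extra node in a small working graph connected to the retrieved KG entities, and then run message passing over the combined graph. The sketch below is a toy illustration with made-up nodes and a generic mean-aggregation update, not QA-GNN's actual architecture.

```python
import numpy as np

# Toy working graph: the QA context is added as an extra node and linked
# to KG entities retrieved for the question, so message passing can model
# context-KG interactions. Node names and edges are made up for illustration.
nodes = ["qa_context", "leonardo_da_vinci", "italy", "painter"]
edges = [
    ("qa_context", "leonardo_da_vinci"),
    ("qa_context", "painter"),
    ("leonardo_da_vinci", "italy"),
    ("leonardo_da_vinci", "painter"),
]

# Random initial node features stand in for LM / KG embeddings.
rng = np.random.default_rng(0)
features = {n: rng.normal(size=8) for n in nodes}

def message_passing_step(features, edges):
    """One round of mean aggregation over neighbors (a generic GNN update,
    not QA-GNN's exact rule)."""
    neighbors = {n: [] for n in features}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    return {
        n: np.mean([features[m] for m in neighbors[n]] + [features[n]], axis=0)
        for n in features
    }

features = message_passing_step(features, edges)
print(features["qa_context"][:4])  # context representation now mixes in KG information
```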

Read More

[Paper Summary] A new study from Cambridge, Twitter, and UCLA proposes CW Networks (CWNs) with better Expressive Power than GNNs

A recent study from a multi-institutional research team introduces CW Networks (CWNs), a message-passing mechanism that produces state-of-the-art results across a variety of molecular datasets while delivering greater expressivity than commonly used graph neural networks (GNNs).

Read More

IBM Open Sources ‘CodeFlare’, a Machine Learning Framework that Simplifies Scaling AI Workflows onto the Hybrid Cloud

IBM has open-sourced CodeFlare, a machine learning framework that allows developers to train their models more efficiently on the hybrid cloud. The new framework is an exciting prospect for those looking to simplify their workflows and shorten the time they take to run. The idea behind the design is that a user running 10,000 pipelines may wait up to 4 hours before receiving a result; with this new framework, the same workload can be completed in only 15 minutes.

Read More

[Paper Summary] Stanford AI Lab introduces AGQA: A new benchmark for Compositional, Spatio-Temporal Reasoning

Designing machines capable of exhibiting a compositional understanding of visual events has been an important goal of the computer vision community. Stanford AI has recently introduced the ‘Action Genome Question Answering’ (AGQA) benchmark. It measures temporal, spatial, and compositional reasoning via nearly two hundred million question-answer pairs. The questions are complex, compositional, and annotated to allow definitive tests of the types of questions that models can and cannot answer.

Read More

Google AI introduces MIAP (More Inclusive Annotations for People) Dataset in the Open Images Extended Collection for Computer Vision Research

Obtaining datasets that include thorough labeling of sensitive attributes is difficult, especially in the domain of computer vision. Recently, Google has introduced the More Inclusive Annotations for People (MIAP) dataset in their Open Images Extended collection.

Read More

Intel’s AI is helping NFL hopefuls to reach their full potential

EXOS is piloting the use of Intel’s 3D Athlete Tracking (3DAT) technology to help the next generation of professional footballers reach their full potential. This year’s hopefuls risk feeling unprepared after coming off such a disruptive year and will need all the help they can get to achieve their goals. 3DAT is a computer vision […]

Read More

Former NHS surgeon creates AI ‘virtual patient’ for remote training

A former NHS surgeon has created an AI-powered “virtual patient” which helps to keep skills sharp during a time when most in-person training is on hold. Dr Alex Young is a trained orthopaedic and trauma surgeon who founded Virti and set out to use emerging technologies to provide immersive training for both new healthcare professionals…

Read More

The White House is set to boost AI funding by 30 percent

A budget proposal from the White House would boost funding for AI by around 30 percent as the US aims to retain its technological supremacy. Countries around the world are vastly increasing their budgets for AI, and with good reason. Just look at Gartner’s Hype Cycle released yesterday to see how important the technology is…

Read More