[Paper Summary] Scientists have created a new Tool ‘Storywrangler’ that can explore billions of Social Media messages in order to Predict Future Conflicts and Turmoil

Scientists have recently built an instrument to delve deeper into the billions of posts made on Twitter since 2008. The new tool provides an unprecedented, minute-by-minute view of what is popular on the platform. The research was carried out by a team at the University of Vermont, which calls the instrument Storywrangler.

Read More

[Paper Summary] Facebook AI Introduces few-shot NAS (Neural Architecture Search)

Neural Architecture Search (NAS) has recently become an interesting area of deep learning research, offering promising results. One such approach, Vanilla NAS, uses search techniques to explore the search space and evaluate new architectures by training them from scratch. However, this may require thousands of GPU hours, leading to a very high computing cost for many research applications.
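
To make the cost argument concrete, below is a minimal Python sketch of a vanilla NAS loop in the spirit described above (random search with from-scratch training). The search space and the train/evaluate helpers are illustrative stand-ins, not Facebook AI's few-shot NAS code.

```python
# Minimal sketch of "vanilla" NAS via random search: every sampled candidate is
# trained from scratch, which is what makes the approach so GPU-hungry.
import random

SEARCH_SPACE = {          # hypothetical search space for illustration
    "depth": [8, 14, 20],
    "width": [32, 64, 128],
    "kernel": [3, 5, 7],
}

def sample_architecture():
    # Draw one architecture uniformly at random from the search space.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_from_scratch(arch):
    # Stand-in for a full training run; in practice this is hours of GPU time.
    return {"arch": arch}

def evaluate(model):
    # Stand-in for validation accuracy; here just a random score.
    return random.random()

def vanilla_nas(num_trials=10):
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        model = train_from_scratch(arch)   # total cost scales linearly with trials
        score = evaluate(model)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    print(vanilla_nas())
```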

Read More

[Paper Summary] DeepMind introduces its Supermodel AI ‘Perceiver’: a Neural Network Model that can process all types of input

DeepMind recently released a state-of-the-art deep learning model called Perceiver in a new paper. It adapts the Transformer so that it can consume all types of input, ranging from audio to images, and perform different tasks, such as image recognition, for which specialized neural networks are usually developed. It works very similarly to how the human brain perceives multi-modal input.
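
As a rough illustration of the core idea, the toy NumPy sketch below shows a small, fixed-size latent array cross-attending to an arbitrarily long, flattened input (pixels, audio samples, and so on). All shapes, sizes, and random projections here are made-up assumptions, not DeepMind's implementation.

```python
# Toy cross-attention: a fixed-size latent array attends to a long input array,
# so the cost of the bottleneck stays the same regardless of input modality.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latents, inputs, d_model=64, seed=0):
    # latents: (num_latents, d_model), inputs: (num_inputs, d_input)
    rng = np.random.default_rng(seed)
    w_q = rng.normal(size=(latents.shape[1], d_model))
    w_k = rng.normal(size=(inputs.shape[1], d_model))
    w_v = rng.normal(size=(inputs.shape[1], d_model))
    q, k, v = latents @ w_q, inputs @ w_k, inputs @ w_v
    attn = softmax(q @ k.T / np.sqrt(d_model))   # (num_latents, num_inputs)
    return attn @ v                              # back to a latent-sized array

latents = np.random.rand(32, 64)           # fixed-size latent array
image_like = np.random.rand(50176, 3)      # e.g. 224*224 RGB pixels, flattened
print(cross_attention(latents, image_like).shape)   # (32, 64)
```

The same latent bottleneck works whether the input is a flattened image or an audio clip, which is what makes the design modality-agnostic.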

Read More

[Paper Summary] Researchers at Facebook AI, UC Berkeley, and Carnegie Mellon University Announced Rapid Motor Adaptation (RMA), An Artificial Intelligence (AI) Technique

To achieve success in the real world, walking robots must adapt to whatever surfaces they encounter, objects they carry, and conditions they are in, even if they’ve not been exposed to those conditions before. Moreover, to avoid falling and suffering damage, these adjustments must happen in fractions of a second.

Read More

[Paper Summary] Researchers from the University of Sydney and Japan’s NIMS have discovered a way to create Artificial Networks of Nanowires

A team of researchers from the University of Sydney and Japan’s National Institute for Materials Science (NIMS) has demonstrated that a random network of nanowires can mimic both the structure and the dynamics of the brain and solve simple processing tasks.

Read More

[Paper Summary] Stanford AI Lab introduces AGQA: A new benchmark for Compositional, Spatio-Temporal Reasoning

Designing machines capable of exhibiting a compositional understanding of visual events has been an important goal of the computer vision community. Stanford AI has recently introduced the benchmark ‘Action Genome Question Answering’ (AGQA). It measures temporal, spatial, and compositional reasoning via nearly two hundred million question-answer pairs. The questions are complex, compositional, and annotated to allow definitive tests that pinpoint the types of questions the models can and cannot answer.

Read More

Google AI introduces MIAP (More Inclusive Annotations for People) Dataset in the Open Images Extended Collection for Computer Vision Research

Obtaining datasets that include thorough labeling of sensitive attributes is difficult, especially in the domain of computer vision. Recently, Google has introduced the More Inclusive Annotations for People (MIAP) dataset in their Open Images Extended collection.

Read More

[Paper] Facebook AI releases Dynaboard: A New Evaluation platform for NLP Models

Last year, Facebook AI released Dynabench, a platform that radically rethinks benchmarking in AI, starting with natural language processing (NLP) models. Building on that, they have now announced Dynaboard, an evaluation-as-a-service platform for comprehensive, standardized evaluations of NLP models. Dynaboard can perform apples-to-apples comparisons dynamically, without the common issues stemming from bugs in evaluation code, inconsistencies in filtering test data, backward compatibility, accessibility, and several other reproducibility problems.

Read More

[Paper Summary] Researchers from NVIDIA, Stanford University and Microsoft Research propose Efficient Trillion-Parameter Language Model Training on GPU Clusters

In a paper by NVIDIA, Stanford University, and Microsoft Research, a research team has proposed a new parallelization schedule that improves throughput by more than 10 percent with a comparable memory footprint. The paper demonstrated that such strategies could be composed to achieve high aggregate throughput when training large models with nearly a trillion parameters. 
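
As a back-of-the-envelope illustration of how such parallelization strategies compose, the snippet below checks that the data-, tensor-, and pipeline-parallel degrees of a configuration multiply out to the available GPU count; the specific numbers are purely illustrative assumptions, not values from the paper.

```python
# Illustrative only: the product of data-, tensor-, and pipeline-parallel
# degrees must equal the total number of GPUs in the cluster.
def check_parallel_config(num_gpus, data_parallel, tensor_parallel, pipeline_parallel):
    # Tensor parallelism splits individual layers across GPUs, pipeline
    # parallelism splits the layer stack into stages, and data parallelism
    # replicates the resulting model shard across the remaining GPUs.
    assert data_parallel * tensor_parallel * pipeline_parallel == num_gpus, \
        "parallelism degrees must multiply to the total GPU count"
    return dict(gpus=num_gpus, dp=data_parallel, tp=tensor_parallel, pp=pipeline_parallel)

# Example: 1024 GPUs arranged as 8-way tensor x 16-stage pipeline x 8-way data parallelism.
print(check_parallel_config(1024, data_parallel=8, tensor_parallel=8, pipeline_parallel=16))
```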

Read More

[Paper Summary] Researchers at ETH Zurich and UC Berkeley Propose Deep Reward Learning by Simulating The Past (Deep RLSP)

In a new research paper, a research team from ETH Zurich and UC Berkeley has proposed ‘Deep Reward Learning by Simulating the Past’ (Deep RLSP). The algorithm represents rewards directly as a linear combination of features learned through self-supervised representation learning. It enables agents to simulate human actions “backward in time” to infer what they must have done.
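
A minimal sketch of that reward representation, assuming a stand-in feature encoder rather than the authors' learned one, might look like this:

```python
# Reward as a linear combination w · φ(s) of learned state features φ(s).
import numpy as np

def phi(state):
    # Stand-in for a self-supervised feature encoder; Deep RLSP learns this
    # representation from data rather than hand-coding it.
    return np.tanh(state)

def reward(state, w):
    # The reward is linear in the learned features.
    return float(w @ phi(state))

w = np.array([0.5, -1.0, 2.0])        # illustrative feature weights
state = np.array([0.2, 0.7, -0.3])    # toy environment state
print(reward(state, w))
```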

Read More

Researchers from DeepMind and the University of Alberta propose a Policy-Guided Heuristic Search Algorithm

DeepMind’s AlphaGo and its successors previously demonstrated that a learned policy and a heuristic (value) function, combined through the PUCT (Polynomial Upper Confidence Trees) search algorithm, can be quite effective for guiding search in adversarial games. However, PUCT is computationally inefficient and offers no guarantees on its search effort. Other methods, such as LevinTS, do provide guarantees on the number of search steps, but they do not use a heuristic function.
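
For context, the snippet below sketches the standard PUCT-style selection rule used in AlphaGo-like search: an exploitation value plus a prior-weighted exploration bonus. The constant and the toy node statistics are illustrative assumptions, not values from the paper.

```python
# PUCT-style action selection at a single search node.
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    # Exploitation term (q) plus an exploration bonus weighted by the policy
    # prior; the bonus shrinks as the child accumulates visits.
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

def select_action(children):
    # children: {action: (q_value, policy_prior, visit_count)}
    parent_visits = sum(v for _, _, v in children.values())
    return max(
        children,
        key=lambda a: puct_score(children[a][0], children[a][1],
                                 parent_visits, children[a][2]),
    )

children = {"a": (0.6, 0.3, 10), "b": (0.4, 0.6, 2), "c": (0.1, 0.1, 0)}
print(select_action(children))   # picks the action balancing value and prior
```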

Read More

Google AI introduces a new system for Open-Domain Long-Form Question Answering (LFQA)

Open-domain long-form question answering (LFQA) is a fundamental challenge in natural language processing (NLP) that involves retrieving documents relevant to a given query and using them to generate a detailed, paragraph-length answer.
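
Schematically, a retrieve-then-generate LFQA pipeline looks like the toy sketch below; the word-overlap retriever and template "generator" are stand-ins for the dense retriever and large seq2seq model used in real systems such as Google's.

```python
# Toy retrieve-then-generate pipeline for long-form question answering.
def retrieve(query, corpus, k=2):
    # Stand-in lexical retriever: rank documents by word overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def generate_answer(query, passages):
    # Stand-in for a seq2seq generator conditioned on the query and passages.
    context = " ".join(passages)
    return f"Q: {query}\nBased on the retrieved passages: {context}"

corpus = [
    "Volcanoes erupt when magma rises through cracks in the crust.",
    "The stock market closed higher today.",
    "Pressure from dissolved gases drives explosive volcanic eruptions.",
]
query = "Why do volcanoes erupt?"
print(generate_answer(query, retrieve(query, corpus)))
```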

Read More

Google AI Introduces ‘Model Search’: An Open Source Platform For Finding Optimal Machine Learning (ML) Models

Google AI has announced the release of Model Search, a platform that will help researchers develop machine learning (ML) models automatically and efficiently. Model Search isn’t domain-specific; it is flexible and well equipped to find the architecture that best fits a given dataset and problem. At the same time, it minimizes coding time, effort, and resources. […]

Read More