[Paper] Yoshua Bengio Team Designs Consciousness-Inspired Planning Agent for Model-Based RL

Imagine you’re in an airport, searching for your departure gate. Humans have an excellent ability to extract relevant information from unfamiliar environments and use it to pursue a specific goal. This practical conscious processing of information, known as consciousness in the first sense (C1), is achieved by focusing on a small subset of relevant variables in an environment. In the airport scenario, we would ignore the souvenir shops and focus only on gate-number signage. This ability enables us to generalize and adapt well to new situations and to learn new skills or concepts from only a few examples.

Read More

[Paper] IEEE Publishes Comprehensive Survey of Bottom-Up and Top-Down Neural Processing System Design

In a new IEEE paper, a research team provides a comprehensive overview of bottom-up and top-down design approaches toward neuromorphic intelligence, highlighting the different levels of granularity present in existing silicon implementations and assessing the benefits of the different circuit design styles of neural processing systems.

Read More

Google Proposes Efficient and Modular Implicit Differentiation for Optimization Problems

A new Google Research study proposes a unified, efficient and modular approach for implicit differentiation of optimization problems that combines the benefits of implicit differentiation and automatic differentiation (autodiff). The researchers say that solvers equipped with implicit differentiation via the proposed framework can make the autodiff process more efficient for end users.
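The core idea behind implicit differentiation is that one can differentiate through a solver's output without unrolling the solver itself: if the solution x*(θ) satisfies an optimality condition F(x*, θ) = 0, the implicit function theorem gives dx*/dθ = −(∂F/∂x)⁻¹ ∂F/∂θ. As a minimal sketch (our own NumPy toy, not the paper's framework or API), consider ridge regression, where x*(θ) = argmin‖Ax − b‖² + θ‖x‖²:

```python
import numpy as np

def solve(A, b, theta):
    # Closed-form minimizer of ||Ax - b||^2 + theta ||x||^2,
    # i.e. the root of F(x, theta) = A^T (A x - b) + theta x = 0.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + theta * np.eye(n), A.T @ b)

def implicit_grad(A, b, theta):
    # Implicit function theorem: dx*/dtheta = -(dF/dx)^{-1} dF/dtheta,
    # with dF/dx = A^T A + theta I and dF/dtheta = x*.
    n = A.shape[1]
    x_star = solve(A, b, theta)
    return np.linalg.solve(A.T @ A + theta * np.eye(n), -x_star)

rng = np.random.default_rng(0)
A, b = rng.normal(size=(8, 3)), rng.normal(size=8)
theta = 0.5

g = implicit_grad(A, b, theta)
# Sanity check against central finite differences through the solver.
eps = 1e-6
fd = (solve(A, b, theta + eps) - solve(A, b, theta - eps)) / (2 * eps)
print(np.allclose(g, fd, atol=1e-5))  # True
```

The appeal is that the gradient comes from one linear solve at the optimum rather than from backpropagating through every solver iteration; the proposed framework generalizes this recipe to user-specified optimality conditions.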

Read More

Facebook AI Conducts Large-Scale Study on Unsupervised Spatiotemporal Representation Learning

A research team from Facebook AI recently published a large-scale study on unsupervised spatiotemporal representation learning from videos, aiming to compare the various meta-methodologies on common ground. With a unified perspective on four current image-based frameworks (MoCo, SimCLR, BYOL, SwAV), the team identifies a simple objective they say can easily generalize all these methodologies to space-time.
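At the heart of such frameworks is the objective of pulling the representations of two clips from the same video together while keeping other videos apart. As a rough illustration only (variable names and shapes are our own, not the paper's formulation), an InfoNCE-style contrastive loss over clip embeddings looks like:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    # z1, z2: (N, D) L2-normalized embeddings of two clips per video.
    # Row i of z1 and row i of z2 form a positive pair; all other rows
    # in z2 serve as negatives for row i of z1.
    logits = z1 @ z2.T / tau                      # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # cross-entropy on positives

rng = np.random.default_rng(1)
z = rng.normal(size=(4, 16))
z1 = z / np.linalg.norm(z, axis=1, keepdims=True)
z2 = z1.copy()  # identical clip embeddings: positives maximally similar
print(info_nce(z1, z2))
```

Methods like BYOL and SwAV replace the explicit negatives with a predictor or cluster assignments, but the shared principle, which is what the study extends from images to space-time, is invariance of the representation across views of the same instance.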

Read More

Model Scaling That’s Both Accurate and Fast: Facebook AI…

The past several years have seen the rapid development of new hardware for training and running convolutional neural networks. Highly parallel hardware accelerators such as GPUs and TPUs have enabled machine learning researchers to design and train larger, more accurate neural networks that can be deployed in increasingly demanding real-life applications.

Read More

Facebook & Google’s LazyTensor Enables Expressive Domain-Specific Compilers

A team from Facebook and Google has proposed LazyTensor — a technique for targeting domain-specific compilers without sacrificing define-by-run ergonomics.
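The general idea behind lazy tensors is that operations record an expression graph instead of executing eagerly, so a whole subgraph can later be handed to a domain-specific compiler while the user still writes ordinary define-by-run code. As a toy sketch of that principle (our own illustration, not the LazyTensor API), a minimal lazy wrapper might look like:

```python
import numpy as np

class Lazy:
    """Records operations as a graph; nothing runs until evaluate()."""

    def __init__(self, op, args):
        self.op, self.args = op, args

    def __add__(self, other):
        return Lazy("add", [self, other])

    def __mul__(self, other):
        return Lazy("mul", [self, other])

    def evaluate(self):
        # A real backend would fuse and compile the graph here;
        # this sketch simply interprets it node by node.
        if self.op == "leaf":
            return self.args[0]
        vals = [a.evaluate() for a in self.args]
        return {"add": np.add, "mul": np.multiply}[self.op](*vals)

def tensor(x):
    return Lazy("leaf", [np.asarray(x)])

a, b = tensor([1.0, 2.0]), tensor([3.0, 4.0])
c = a * b + a        # builds a graph; no arithmetic has happened yet
print(c.evaluate())  # forces evaluation of the recorded graph
```

Deferring execution this way is what lets a compiler see, and optimize, whole expressions rather than one eager op at a time, while the user-facing code keeps its imperative feel.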

Read More

10 AI-powered art projects

AI has become increasingly capable of generating impressive artworks across a wide range of styles and forms, from abstract painting to prose writing, film scores, and even operas. Many researchers spent much of 2020 at home, and apparently quite a few used that time to explore AI’s creative potential. As part of our year-end series, Synced highlights 10 AI-powered art projects that inspired and entertained us in 2020.

Read More