Imagine you’re in an airport, searching for your departure gate. We humans have an excellent ability to extract relevant information from unfamiliar environments to guide us toward a specific goal. This practical, conscious processing of information, known as consciousness in the first sense (C1), is achieved by focusing on a small subset of relevant variables in an environment — in the airport scenario, we would ignore the souvenir shops and focus only on gate-number signage — and it enables us to generalize and adapt well to new situations and to learn new skills or concepts from only limited examples.
In a new paper, a team from the IEEE (Institute of Electrical and Electronics Engineers) provides a comprehensive overview of the bottom-up and top-down design approaches toward neuromorphic intelligence, highlighting the different levels of granularity present in existing silicon implementations and assessing the benefits of the different circuit design styles of neural processing systems.
A new Google Research study proposes a unified, efficient and modular approach to implicit differentiation of optimization problems that combines the benefits of implicit differentiation and automatic differentiation (autodiff). The researchers say solvers equipped with implicit differentiation via the proposed framework can make the autodiff process more efficient for end users.
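The core idea behind implicit differentiation can be illustrated independently of the paper's actual framework. The sketch below (plain NumPy, not the Google library) differentiates a ridge-regression solver with respect to its regularization strength by applying the implicit function theorem to the solver's optimality condition, then checks the result against finite differences. All names here are illustrative.

```python
import numpy as np

# Toy problem: x*(theta) = argmin_x 0.5*||A x - b||^2 + 0.5*theta*||x||^2
# Optimality condition: F(x, theta) = (A^T A + theta*I) x - A^T b = 0
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))
b = rng.normal(size=8)
I = np.eye(3)

def solve(theta):
    """Run the 'solver': return the minimizer x*(theta) in closed form."""
    return np.linalg.solve(A.T @ A + theta * I, A.T @ b)

def implicit_grad(theta):
    """dx*/dtheta via the implicit function theorem:
    dx/dtheta = -(dF/dx)^{-1} (dF/dtheta) = -(A^T A + theta*I)^{-1} x*."""
    x_star = solve(theta)
    return -np.linalg.solve(A.T @ A + theta * I, x_star)

theta = 0.5
eps = 1e-6
# Finite-difference check: the two derivatives should agree closely.
fd = (solve(theta + eps) - solve(theta - eps)) / (2 * eps)
assert np.allclose(implicit_grad(theta), fd, atol=1e-5)
```

The appeal of this route, as the study emphasizes, is that only the solution and its optimality condition are needed — there is no need to differentiate through the solver's inner iterations.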
A research team from Facebook AI recently published a large-scale study on unsupervised spatiotemporal representation learning from videos, aiming to compare the various meta-methodologies on common ground. With a unified perspective on four current image-based frameworks (MoCo, SimCLR, BYOL, SwAV), the team identifies a simple objective they say can easily generalize all these methodologies to space-time.
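The unifying idea — treat clips sampled from the same video at different times as positive pairs under a contrastive (InfoNCE-style) objective — can be sketched in a few lines. The code below is an illustrative NumPy toy, not the paper's implementation; the embeddings stand in for the output of some video backbone.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor clip embedding should match the positive
    embedding from the same video against all other videos in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # diagonal = same-video pairs

# Two clips per video (different timestamps), embedded by a backbone.
rng = np.random.default_rng(0)
z_t0 = rng.normal(size=(4, 16))                 # clip at time t0, 4 videos
z_t1 = z_t0 + 0.05 * rng.normal(size=(4, 16))   # later clip, similar features
loss = info_nce(z_t0, z_t1)                     # low when clips persist in time
```

Swapping image augmentations for temporal sampling is the only change needed; the image-based frameworks' losses otherwise carry over unchanged.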
The past several years have seen the rapid development of new hardware for training and running convolutional neural networks. Highly parallel hardware accelerators such as GPUs and TPUs have enabled machine learning researchers to design and train more complex and accurate neural networks that can be deployed in more demanding real-life applications.
A team from Facebook and Google has proposed LazyTensor, a technique for targeting domain-specific compilers without sacrificing define-by-run ergonomics.
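The trick LazyTensor relies on — recording operations into a graph instead of executing them eagerly, then materializing results on demand — can be shown with a toy Python class. This is a pedagogical sketch of lazy tracing in general, not the LazyTensor implementation or API; a real backend would hand the recorded graph to a domain-specific compiler such as XLA rather than interpret it.

```python
import numpy as np

class LazyArray:
    """Records ops define-by-run style; nothing executes until .value()."""
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs

    @staticmethod
    def constant(x):
        return LazyArray(lambda: np.asarray(x, dtype=float), [])

    def __add__(self, other):
        return LazyArray(np.add, [self, other])

    def __mul__(self, other):
        return LazyArray(np.multiply, [self, other])

    def value(self):
        """Evaluate the recorded graph. Here we simply interpret it;
        a compiler backend would optimize the whole graph first."""
        if not self.inputs:
            return self.op()
        return self.op(*(x.value() for x in self.inputs))

# Looks like ordinary eager code, but only builds a graph:
a = LazyArray.constant([1.0, 2.0])
b = LazyArray.constant([3.0, 4.0])
c = a * b + b        # no computation has happened yet
result = c.value()   # the graph is evaluated only here
```

Because the user writes ordinary imperative code, the define-by-run feel is preserved while the backend still sees a whole graph to optimize — the trade-off LazyTensor targets.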
A recent study by the Google Brain Team proposes a new way of programming automated machine learning (AutoML) based on symbolic programming. The researchers have also introduced PyGlove, a Python library that demonstrates the new paradigm's promising results.
AI has become increasingly capable of generating impressive artworks across a wide range of styles and forms, from abstract painting to prose writing, film scores, and even operas. Many researchers spent much of 2020 at home, where, apparently, many explored AI’s creative potential. As part of our year-end series, Synced highlights 10 AI-powered art projects that inspired and entertained us in 2020.
Synced has selected 10 AI-related podcasts for readers to check out over the holiday season.
Much of the world may be on hold, but AI research is still booming. The volume of peer-reviewed AI papers has grown by more than 300 percent over the last two decades, and attendance at AI conferences continues to increase significantly, according to the Stanford AI Index.