Imagine you’re in an airport, searching for your departure gate. Humans have an excellent ability to extract relevant information from unfamiliar environments to guide us toward a specific goal. This practical, conscious processing of information, also known as consciousness in the first sense (C1), is achieved by focusing on a small subset of relevant variables in an environment — in the airport scenario we would ignore the souvenir shops and focus only on gate-number signage — and it enables us to generalize and adapt well to new situations and to learn new skills or concepts from only a limited number of examples.
In a new paper, a team from the IEEE (Institute of Electrical and Electronics Engineers) provides a comprehensive overview of the bottom-up and top-down design approaches toward neuromorphic intelligence, highlighting the different levels of granularity present in existing silicon implementations and assessing the benefits of the different circuit design styles of neural processing systems.
A new Google Research study has proposed a unified, efficient and modular approach for implicit differentiation of optimization problems that combines the benefits of implicit differentiation and automatic differentiation (autodiff). The researchers say solvers equipped with implicit differentiation set up by the proposed framework can make the autodiff process more efficient for end-users.
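The core idea behind implicit differentiation can be sketched on a toy root-finding problem. This is an illustrative sketch only, not the paper's API: the function names `solve` and `implicit_grad` and the problem F(x, θ) = x² − θ are assumptions made for the example. If x*(θ) satisfies F(x*, θ) = 0, the implicit function theorem gives dx*/dθ = −(∂F/∂x)⁻¹ ∂F/∂θ, so the gradient is obtained from the solution alone, without differentiating through the solver's iterations.

```python
import math

def solve(theta, tol=1e-12):
    """Find the positive root x of F(x, theta) = x**2 - theta via Newton's method."""
    x = max(theta, 1.0)  # crude positive initial guess
    while True:
        step = (x * x - theta) / (2.0 * x)  # F(x) / F'(x)
        x -= step
        if abs(step) < tol:
            return x

def implicit_grad(theta):
    """Implicit function theorem: dx*/dtheta = -(dF/dtheta) / (dF/dx) at the solution."""
    x = solve(theta)
    dF_dx = 2.0 * x      # partial of F with respect to the solution x
    dF_dtheta = -1.0     # partial of F with respect to the parameter theta
    return -dF_dtheta / dF_dx

# Closed form for comparison: x*(theta) = sqrt(theta), so dx*/dtheta = 1 / (2 * sqrt(theta)).
print(implicit_grad(4.0))                      # 0.25
print(1.0 / (2.0 * math.sqrt(4.0)))            # 0.25
```

Note that only the converged solution x* enters the gradient formula, which is what lets a framework like the one described attach autodiff-compatible derivatives to black-box solvers.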
Didi Autonomous Driving is set to complete a new USD 300 million financing round, with USD 200 million coming from Guangzhou Automobile Group and its capital investment arm. Since Didi’s autonomous driving unit was spun off from Didi Chuxing in 2019, it has raised a total of more than USD 1.1 billion. A person familiar with the matter said that after this round of financing, the valuation of Didi’s self-driving unit will exceed that of startup Pony.ai.
A research team from Facebook AI recently published a large-scale study on unsupervised spatiotemporal representation learning from videos, aiming to compare the various meta-methodologies on common ground. With a unified perspective on four current image-based frameworks (MoCo, SimCLR, BYOL, SwAV), the team identifies a simple objective they say can easily generalize all these methodologies to space-time.
The past several years have seen the rapid development of new hardware for training and running convolutional neural networks. Highly parallel hardware accelerators such as GPUs and TPUs have enabled machine learning researchers to design and train more complex and accurate neural networks that can be employed in more demanding real-life applications.
A team from Facebook and Google has proposed LazyTensor — a technique for targeting domain-specific compilers without sacrificing define-by-run ergonomics.
On February 16, Goldman Sachs announced that it is launching an automated wealth management platform to invest client funds in a portfolio of stocks and bonds.
The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) kicked off today as a virtual conference. The organizing committee announced the Best Paper Awards and Runners Up during this morning’s opening ceremony. Three papers received Best Paper Awards and three were recognized as Runners Up. The total of 9,034 submissions to AAAI 2021 marked another record high, surpassing last year’s 8,800. Submissions from China (3,319) almost doubled the number of papers from the United States (1,822). Of the 7,911 papers that went to review, 1,692 were accepted. This year’s acceptance rate was 21 percent, slightly higher than last year’s 20.6 percent.
A recent study by the Google Brain Team proposes a new way of programming automated machine learning (AutoML) based on symbolic programming. The researchers have also introduced PyGlove, a Python library that demonstrates the new paradigm’s promising results.
AI firm 4Paradigm announced a USD 700 million Series D round from Boyu Capital, Primavera Capital and Hopu Investments. Founded in 2014, 4Paradigm began by delivering full-stack AI solutions to banking clients.
AI has become increasingly capable of generating impressive artworks across a wide range of styles and forms, from abstract painting to prose writing, film scores, and even operas. Many researchers spent much of 2020 at home, and apparently quite a few used the time to explore AI’s creative potential. As part of our year-end series, Synced highlights 10 AI-powered art projects that inspired and entertained us in 2020.
Synced has selected 10 AI-related podcasts for readers to check out over the holiday season.
Much of the world may be on hold, but AI research is still booming. The volume of peer-reviewed AI papers has grown by more than 300 percent over the last two decades, and attendance at AI conferences continues to increase significantly, according to the Stanford AI Index.