Parameters are the key to machine learning algorithms. They’re the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. For example, OpenAI’s GPT-3 — one of the largest language models ever trained, at 175 billion parameters — can make primitive analogies, generate recipes, and even complete basic code.
One longstanding goal of AI research is to allow robots to meaningfully interact with real-world environments. In a recent paper, researchers at Stanford and Facebook took a step toward this by extracting information related to actions like pushing or pulling objects with movable parts and using it to train an AI model. For example, given a drawer, their model can predict that applying a pulling force on the handle would open the drawer.
In late 2019, researchers affiliated with Facebook, New York University (NYU), the University of Washington, and DeepMind proposed SuperGLUE, a new benchmark for AI designed to summarize research progress on a diverse set of language tasks. Building on the GLUE benchmark, which had been introduced one year prior, SuperGLUE includes a set of more difficult language understanding challenges, improved resources, and a publicly available leaderboard.
The 2010s were huge for artificial intelligence, thanks to advances in deep learning, a branch of AI that has become feasible because of the growing capacity to collect, store, and process large amounts of data. Today, deep learning is not just a topic of scientific research but also a key component of many everyday applications. But a decade’s worth of research and application has made it clear that in its current state, deep learning is not the final solution to solving the ever-elusive challenge of creating human-level AI.
There’s no doubt about it: artificial intelligence has been a bit of a buzzword this year. It has been established as the main driver of emerging technologies such as big data, robotics, and the IoT. So, what do the next 12 months look like for AI?
As a result of the global pandemic, consumer trends have changed significantly, which has resulted in some notable trends in the world of AI for 2021…
Last week, on the heels of DeepMind’s breakthrough in using AI to predict protein folding came the news that the UK-based AI company is still costing its parent company Alphabet Inc hundreds of millions of dollars in losses each year. A tech company losing money is nothing new. The tech industry is replete with examples of companies that burned investor money long before becoming profitable. But DeepMind is not a normal company seeking to grab a share of a specific market. It is an AI research lab that has had to repurpose itself into a semi-commercial outfit to ensure its survival.
The machine learning community, particularly in the fields of computer vision and language processing, has a data culture problem. That’s according to a survey of research into the community’s dataset collection and use practices published earlier this month.
What’s needed is a shift away from reliance on the large, poorly curated datasets used to train machine learning models. Instead, the study recommends a culture that cares for the people who are represented in datasets and respects their privacy and property rights.
Presented by SambaNova Systems
To stay on top of cutting-edge AI innovation, it’s time to upgrade your technology stack. In this VB Live event, you’ll learn how innovations in NLP, visual AI, recommendation models, and scientific computing are pushing computer architecture to its limits.
Anomalo, a provider of data validation and documentation services for developers, today announced that it raised $5.95 million in venture capital. First Round Capital and Foundation Capital participated in the seed funding round, which is Anomalo’s first since its founding in 2018.
IonQ today laid out its five-year roadmap for trapped ion quantum computers. The company plans to deploy rack-mounted modular quantum computers small enough to be networked together in a datacenter by 2023, a step it expects will yield a quantum advantage for machine learning. IonQ then plans to achieve broad quantum advantage by 2025.
During IBM’s virtual AI Summit this week, the company announced updates across its Watson family of products in the areas of language, explainability, and workplace automation. A new feature called Reading Comprehension surfaces answers from databases of enterprise documents in response to natural language questions, assigning a confidence score to each response.
Nvidia AI researchers have introduced an AI system that generates talking heads for video conferences from a single 2D image and supports a wide range of manipulations, from rotating and moving a person’s head to motion transfer and video reconstruction.
Zapata Computing has raised $38 million for its quantum computing enterprise software platform. The figure, which brings its total funding to over $64 million, will be put toward Zapata’s core mission: “Delivering quantum advantage for customers through real business use cases.”
Researchers in China say they’ve created sarcasm detection AI that achieved state-of-the-art performance on a dataset drawn from Twitter. The AI uses multimodal learning that combines text and imagery since both are often needed to understand whether a person is being sarcastic. The researchers argue that sarcasm detection can…
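The paper’s exact architecture isn’t detailed here, but multimodal learning of this kind is often sketched as “late fusion”: a text embedding and an image embedding are concatenated into one feature vector before a classifier head. Below is a minimal illustrative version in numpy; the embeddings, dimensions, and the randomly initialized classifier are all hypothetical stand-ins, not the researchers’ model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-computed embeddings (in practice these would come
# from a text encoder and an image encoder); dimensions are illustrative.
text_emb = rng.normal(size=(4, 128))   # 4 tweets, 128-d text features
image_emb = rng.normal(size=(4, 256))  # matching images, 256-d features

# Late fusion: concatenate the two modalities per example.
fused = np.concatenate([text_emb, image_emb], axis=1)  # shape (4, 384)

# A single untrained logistic layer standing in for the sarcasm classifier.
W = rng.normal(size=(384,))
b = 0.0
logits = fused @ W + b
probs = 1.0 / (1.0 + np.exp(-logits))  # P(sarcastic) for each example

print(fused.shape)  # (4, 384)
```

The point of fusing before classification is that the decision can depend on interactions between the caption and the picture, which is exactly what sarcasm often hinges on.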
Abacus.AI (previously RealityEngines) is developing a service that automates machine learning model creation, deployment, and maintenance. Today the company raised $22 million in a series B round at a valuation of over $100 million.
In October, Google announced it would let users search for songs by simply humming or whistling melodies, initially in English on iOS and in more than 20 languages on Android. At the time, the search giant only hinted at how the new Hum to Search feature worked. But in a blog post today, Google detailed the underlying systems that enable Google Search to find songs using only hummed renditions.
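At a high level, systems like this turn a hummed clip into an embedding vector and look up the nearest song embeddings in a catalog. The sketch below illustrates only that retrieval step, with random vectors standing in for learned melody embeddings; the catalog names and `match_hum` helper are hypothetical, not Google’s implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical melody embeddings for a tiny song catalog; a real system
# would produce these with a trained encoder, not random vectors.
catalog = {
    "song_a": rng.normal(size=64),
    "song_b": rng.normal(size=64),
    "song_c": rng.normal(size=64),
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_hum(hum_embedding, catalog):
    """Rank catalog entries by similarity to the hummed clip's embedding."""
    scores = {name: cosine(hum_embedding, emb) for name, emb in catalog.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Simulate a hum whose embedding lands near song_b's melody.
hum = catalog["song_b"] + 0.1 * rng.normal(size=64)
ranked = match_hum(hum, catalog)
print(ranked[0][0])  # song_b ranks first
```

Because matching happens in embedding space, the hum doesn’t need the song’s lyrics, instrumentation, or key; it only needs to land near the right melody vector.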
AI researchers from Virginia Tech and Western University have concluded that an unequal distribution of compute power in academia is furthering inequality in the era of deep learning. They also point to the impact on academia of people leaving prestigious universities for high-paying industry jobs.
Researchers at Google claim to have developed a machine learning model that can separate a sound source from noisy, single-channel audio based on only a short sample of the target source. In a paper, they say their SoundFilter system can be tuned to filter arbitrary sound sources, even those it hasn’t seen during training.
The researchers believe a noise-eliminating system like SoundFilter…
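SoundFilter itself is a learned model, but the idea of conditioning a filter on a short clip of the target source can be illustrated with classical DSP: estimate the target’s magnitude spectrum from the reference clip and use it as a soft mask over the mixture’s spectrum. The sketch below is only a stand-in for the learned approach, with a synthetic tone-plus-noise mixture and illustrative parameters throughout.

```python
import numpy as np

rng = np.random.default_rng(2)
sr = 8000  # sample rate in Hz, illustrative

t = np.arange(sr) / sr
# Target source: a 440 Hz tone. Interference: broadband noise.
target = np.sin(2 * np.pi * 440 * t)
mixture = target + 0.5 * rng.normal(size=sr)

# Short conditioning clip of the target alone (the "short sample of the
# target source" the paper describes).
reference = np.sin(2 * np.pi * 440 * t[: sr // 4])

# Estimate the target's magnitude spectrum from the reference clip
# (zero-padded to the mixture's length so the frequency bins line up).
ref_mag = np.abs(np.fft.rfft(reference, n=sr))
mask = ref_mag / (ref_mag.max() + 1e-12)  # soft mask in [0, 1]

# Attenuate frequencies the reference doesn't contain.
mix_spec = np.fft.rfft(mixture)
filtered = np.fft.irfft(mask * mix_spec, n=sr)
```

The mask passes the 440 Hz component nearly untouched while suppressing the broadband noise, so the filtered signal tracks the target more closely than the raw mixture does. A learned model like SoundFilter plays a similar role but can generalize to arbitrary, non-stationary sources rather than a fixed spectral template.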
Google this morning launched the Document AI (DocAI) platform, a console for document processing hosted in Google Cloud, in preview. The company says it’s aimed at automating and validating documents by extracting data from documents and making them available to business apps and users. An IDC report revealed that document-related challenges account for a 21.3% productivity loss, and U.S. companies waste a collective $8 billion annually managing paperwork.