More than 20 AI startups have banded together to create the AI Infrastructure Alliance, which aims to build a common software and hardware stack for machine learning, adopt shared standards, and improve interoperability between tools and frameworks made by small to medium-size AI startups.
Author: Khari Johnson
Katana Graph raises $28.5 million to handle unstructured data at scale
Katana Graph developed its technology at the University of Texas at Austin and has helped DARPA and businesses with unstructured data.
Google’s new AI ethics lead calls for more ‘diplomatic’ conversation
Following months of internal conflict and opposition from Congress and thousands of Google employees, Google today announced that it will reorganize its AI ethics operations and place them in the hands of VP Marian Croak, who will lead a new responsible AI research and engineering center of expertise.
OpenAI and Stanford researchers call for urgent action to address harms of large language models like GPT-3
The makers of large language models like Google and OpenAI may not have long to set standards that sufficiently address their impact on society. That’s according to a paper published last week by researchers from OpenAI and Stanford University. Open source projects currently aiming to recreate GPT-3 include GPT-Neo, a project headed by EleutherAI.
Databricks raises $1 billion funding round at $28 billion valuation
Databricks today announced the close of a $1 billion funding round, bringing the company to a $28 billion post-money valuation, a company spokesperson told VentureBeat. News of the funding round, the largest to date for Databricks, was first reported in late January by Newcomer.
Confidence, uncertainty, and trust in AI affect how humans make decisions
In 2019, as the Department of Defense considered adopting AI ethics principles, the Defense Innovation Unit held a series of meetings across the U.S. to gather opinions from experts and the public. At one such meeting in Silicon Valley, Stanford University professor Herb Lin said he was concerned about people trusting AI too easily and argued that any application of AI should include a confidence score indicating the algorithm’s degree of certainty.
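Lin’s proposal can be illustrated with a minimal sketch. Everything below (the toy logits, labels, and function names) is a hypothetical illustration, not anything from the DIU meetings: a classifier reports the softmax probability of its top prediction as a confidence score alongside the label, rather than returning the label alone.

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, labels):
    """Return the top label plus a confidence score, in the spirit of Lin's
    suggestion that AI outputs carry the algorithm's degree of certainty."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical usage: a human reviewer sees the certainty, not just the answer.
label, confidence = predict_with_confidence([2.0, 0.5, 0.1], ["approve", "review", "deny"])
print(f"{label} (confidence {confidence:.0%})")
```

A downstream decision rule could then route low-confidence predictions to a human instead of acting on them automatically.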
What algorithm auditing startups need to succeed
To provide clarity and avert potential harms, algorithms that impact human lives would ideally be reviewed by an independent body before they’re deployed, just as environmental impact reports must be approved before a construction project can begin. While no such legal requirement for AI exists in the U.S., a number of startups have been created to fill an algorithm auditing and risk assessment void.
World Economic Forum launches global alliance to speed trustworthy AI adoption
The World Economic Forum is launching the Global AI Action Alliance today with over 100 organizations participating.
Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru
Google has revoked Ethical AI team leader Margaret “Meg” Mitchell’s employee privileges and is currently investigating her activity, according to a statement provided by a company spokesperson. Should Google fire Mitchell, it will mean the company has effectively chosen to behead its own AI ethics team in under two months.
Incoming White House science and technology leader on AI, diversity, and society
Technologies like artificial intelligence and human genome editing “reveal and reflect even more about the complex and sometimes dangerous social architecture that lies beneath the scientific progress that we pursue,” said Dr. Alondra Nelson today as part of the introduction of the Biden administration science team. On Friday, the Biden transition team appointed Nelson to the position of OSTP deputy director for science and society. Biden will be sworn in Wednesday to officially become the 46th president of the United States.
What this bald eagle and neural network depiction have to do with future U.S. AI strategy
The White House Office of Science and Technology Policy (OSTP) today announced the launch of the National Artificial Intelligence Initiative Office, an organization that will coordinate and oversee national AI policy initiatives for the United States government. “The Office is charged with overseeing and implementing the United States national AI strategy and will serve as the central hub for federal coordination and collaboration in AI research and policymaking across the government, as well as with private sector, academia, and other stakeholders,” according to a White House statement.
OpenAI debuts DALL-E for generating images from text
OpenAI today debuted two multimodal AI systems that combine computer vision and NLP: DALL-E, a system that generates images from text, and CLIP, a network trained on 400 million pairs of images and text. The photo above was generated by DALL-E from the text prompt “an illustration of a baby daikon radish in a tutu walking a dog.” DALL-E uses a 12-billion-parameter version of GPT-3 and, like GPT-3, is a Transformer language model. The name is meant to evoke the artist Salvador Dalí and the robot WALL-E.
How machines are changing the way companies talk
Anyone who’s ever been on an earnings call knows company executives already tend to look at the world through rose-colored glasses. But a new study by economics and machine learning researchers says that tendency is getting worse, thanks to machine learning.
AI research survey finds machine learning needs a culture change
The machine learning community, particularly in the fields of computer vision and language processing, has a data culture problem. That’s according to a survey of research into the community’s dataset collection and use practices published earlier this month.
What’s needed is a shift away from reliance on the large, poorly curated datasets used to train machine learning models. Instead, the study recommends a culture that cares for the people who are represented in datasets and respects their privacy and property rights.
An AI reporter’s favorite books of 2020
The older I get, the more I wish I could stop time so I could read more books. Books that earn my time and attention are those that promise to enrich me as a person and deepen my understanding of AI. This year, I read more than a dozen books, some published in recent months and others in years past, like The Curse of Bigness by Tim Wu, a great read for anyone interested in understanding antitrust, and the novel Parable of the Sower by Octavia E. Butler, one of my favorite books of all time.
From whistleblower laws to unions: How Google’s AI ethics meltdown could shape policy
It’s been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and more recently large language models.
OcéanIA treats climate change like a machine learning grand challenge
Self-driving cars. Artificial general intelligence. Beating a human in a game of chess. Grand challenges are tasks that can seem like moonshots that, if achieved, will move the entire machine learning discipline forward. Now a team of researchers with the recently established OcéanIA is treating the study of the ocean and climate change as a machine learning grand challenge. The four-year project that brings together more than a dozen AI researchers and scientists shared some initial plans this week.
Nvidia introduces AI for generating video conference talking heads from 2D images
Nvidia AI researchers have introduced a system that generates talking heads for video conferences from a single 2D image and supports a wide range of manipulations, from rotating and moving a person’s head to motion transfer and video reconstruction.
AI researchers made a sarcasm detection model and it’s sooo impressive
Researchers in China say they’ve created sarcasm detection AI that achieved state-of-the-art performance on a dataset drawn from Twitter. The AI uses multimodal learning that combines text and imagery since both are often needed to understand whether a person is being sarcastic. The researchers argue that sarcasm detection can…
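The multimodal idea in this brief, that sarcasm often emerges only when upbeat text clashes with what the accompanying image shows, can be sketched as a late-fusion classifier. Everything here (the toy word lists, scene tags, and hand-set weights) is a hypothetical illustration, not the researchers’ actual model:

```python
import math

def text_features(text):
    # Toy stand-in for a text encoder: fraction of positive-sounding words.
    positive = {"great", "love", "wonderful", "perfect"}
    words = text.lower().split()
    return sum(w.strip("!.,") in positive for w in words) / max(len(words), 1)

def image_features(scene_tags):
    # Toy stand-in for an image encoder: fraction of unpleasant scene tags.
    negative = {"rain", "traffic", "queue", "mess"}
    return sum(t in negative for t in scene_tags) / max(len(scene_tags), 1)

def sarcasm_score(text, scene_tags):
    # Late fusion: sarcasm is likely when upbeat text meets a grim image.
    pos_text = text_features(text)
    neg_image = image_features(scene_tags)
    logit = 6.0 * pos_text * neg_image - 1.0   # hand-set weights for the sketch
    return 1 / (1 + math.exp(-logit))          # sigmoid -> probability

# Cheerful caption over a rainy scene scores high; over a sunny one, low.
print(sarcasm_score("Great weather, love it!", ["rain", "umbrella"]))
print(sarcasm_score("Great weather, love it!", ["sun", "beach"]))
```

The point of the sketch is the fusion step: neither the text score nor the image score alone flags sarcasm; only their interaction does, which is why the researchers combine both modalities.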
AI research finds a ‘compute divide’ concentrates power and accelerates inequality in the era of deep learning
AI researchers from Virginia Tech and Western University have concluded that an unequal distribution of compute power in academia is furthering inequality in the era of deep learning. They also point to the impact on academia of people leaving prestigious universities for high-paying industry jobs.