Annual index finds AI is ‘industrializing’ but needs better metrics and testing

The 2021 AI Index from Stanford University gathers data about AI research, startups, and changes to business and government policy.

Read More

We need Ethical Artificial Intelligence

The diverse use cases for AI raise ethical and moral questions about whether the technology is being used in a fair and just manner. Artificial intelligence (AI) is doing what the tech world’s Cassandras have been predicting for some time: it is throwing curve balls, leaving a trail of misadventures and tricky questions around the ethics of using synthetic intelligence. Sometimes the dilemmas AI presents are easy to spot and understand, but often the exact nature of the ethical questions it raises is difficult to pin down.

Read More

GPT-2 vs GPT-3: The OpenAI Showdown

The Generative Pre-trained Transformer (GPT) is an innovation in the Natural Language Processing (NLP) space developed by OpenAI. These models are among the most advanced of their kind and can even be dangerous in the wrong hands. GPT is a generative model trained on unlabelled data: given an input such as a sentence, it tries to generate an appropriate continuation.
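
As a rough illustration of what “generative” means here, below is a minimal sketch that prompts GPT-2, the openly downloadable predecessor of GPT-3, to continue a sentence. The Hugging Face transformers library and the sampling parameters are our assumptions for illustration; the article itself includes no code.

```python
# Minimal sketch: prompting GPT-2 (the open predecessor of GPT-3)
# to continue a sentence. GPT-3 itself is only reachable through
# OpenAI's hosted API.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The Generative Pre-trained Transformer is", return_tensors="pt")

# The model was trained on unlabelled web text, so it simply samples
# plausible next tokens rather than looking up a labelled answer.
outputs = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```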

Read More

OpenAI and Stanford researchers call for urgent action to address harms of large language models like GPT-3

The makers of large language models like Google and OpenAI may not have long to set standards that sufficiently address their impact on society. That’s according to a paper published last week by researchers from OpenAI and Stanford University. Meanwhile, open source projects aiming to recreate GPT-3 include GPT-Neo, a project headed by EleutherAI.

Read More

Here’s where AI will advance in 2021

Artificial intelligence continues to advance at a rapid pace. Even in 2020, a year that did not lack compelling news, AI advances commanded mainstream attention on multiple occasions. OpenAI’s GPT-3, in particular, showed new and surprising ways we may soon see AI penetrate daily life. Such rapid progress makes predicting the future of AI somewhat difficult, but some areas do seem ripe for breakthroughs. Here are a few areas in AI that we feel particularly optimistic about in 2021.

Read More

Six times bigger than GPT-3: Inside Google’s TRILLION parameter switch transformer model

OpenAI’s GPT-3 is arguably the most famous deep learning model created in the last few years, and one of the most impressive things about it is its size. In a sense, GPT-3 is nothing but GPT-2 with many more parameters: at 175 billion parameters, it was roughly ten times bigger than its largest predecessor.
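
The “switch” in the name refers to the model’s mixture-of-experts trick: each token is routed to a single expert feed-forward network, so the parameter count grows with the number of experts while per-token compute stays roughly flat. Below is a toy sketch of that top-1 routing idea in PyTorch; it is our simplified reconstruction of the concept, not Google’s implementation.

```python
# Toy sketch of Switch-Transformer-style top-1 routing: every token is
# dispatched to exactly one expert MLP, so total parameters scale with
# the number of experts while per-token compute stays roughly constant.
import torch
import torch.nn as nn

class SwitchFFN(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        gates = torch.softmax(self.router(x), dim=-1)
        weight, expert_idx = gates.max(dim=-1)  # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # Scale by the gate value so routing stays differentiable.
                out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = SwitchFFN()
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```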

Read More

OpenAI Extends GPT-3 to combine NLP with Images

A pair of neural networks unleashed by GPT-3 developer OpenAI uses text in the form of image captions as a way of generating images. Developers said this predictive approach will help AI systems better understand language by providing context for deciphering the meaning of words, phrases, and sentences.
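
Of the pair, DALL·E generates images from captions and CLIP scores how well a caption matches an image. DALL·E is not publicly downloadable, but CLIP is; below is a minimal caption-matching sketch using CLIP through the Hugging Face transformers library. The library, model checkpoint, and sample data are our choices for illustration.

```python
# Minimal sketch: ranking captions against an image with OpenAI's CLIP,
# the caption-scoring half of the pair described above.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image file
captions = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher probability = better caption/image match, thanks to CLIP's
# contrastive text-image training.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```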

Read More

OpenAI GPT-3 wrote this article about Webpack

This article was written by OpenAI’s GPT-3 model using the “davinci” engine; I gave it a small input and this is the final output. Can you tell the difference between this and a human-written article? Obviously the formatting can be improved, and some parts are missing, but consider that my input was just “webpack is a build tool” and nothing more.
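
For reference, the kind of call that produces such an output looked roughly like the sketch below, using the OpenAI Python library’s legacy completions endpoint as it existed at the time. Only the engine name and prompt come from the article; the other parameters are our assumptions.

```python
# Minimal sketch of the legacy (pre-1.0) OpenAI completions call the
# author describes: a one-line prompt sent to the "davinci" engine.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci",                  # engine named in the article
    prompt="webpack is a build tool",  # the author's entire input
    max_tokens=512,                    # assumed; not stated in the article
    temperature=0.7,                   # assumed; not stated in the article
)
print(response.choices[0].text)
```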

Read More

AI research survey finds machine learning needs a culture change

The machine learning community, particularly in the fields of computer vision and language processing, has a data culture problem. That’s according to a survey of research into the community’s dataset collection and use practices published earlier this month.
What’s needed is a shift away from reliance on the large, poorly curated datasets used to train machine learning models. Instead, the study recommends a culture that cares for the people who are represented in datasets and respects their privacy and property rights.

Read More

Honey, I shrunk the Model: Why big Machine Learning models must go small

Bigger is not always better for machine learning. Yet deep learning models and the datasets on which they’re trained keep expanding as researchers race to outdo one another on state-of-the-art benchmarks. However groundbreaking the results, the consequences of bigger models are severe for budgets and the environment alike.
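
As one concrete example of “going small”, the sketch below applies post-training dynamic quantization in PyTorch, which stores the weights of Linear layers as 8-bit integers. This is our choice of illustration, not a technique the article prescribes.

```python
# Minimal sketch of one way to shrink a model: PyTorch post-training
# dynamic quantization, which converts Linear weights from 32-bit
# floats to 8-bit integers.
import os
import torch
import torch.nn as nn

# A small stand-in network; the absolute savings grow with model size.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m, path="tmp.pt"):
    """On-disk size of a model's weights in megabytes."""
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")
```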

Read More

GPT-3 & Beyond: 10 NLP Research Papers you should read

NLP research advances in 2020 were still dominated by large pre-trained language models, and specifically transformers. Many interesting updates introduced this year have made the transformer architecture more efficient and more applicable to long documents.

Read More

The latest breakthroughs in Conversational AI Agents

2020 was a breakthrough year for conversational agents. First, Google’s chatbot Meena and Facebook’s chatbot Blender demonstrated that dialog agents can achieve close to human-level performance on certain tasks. Then OpenAI’s GPT-3 made many people wonder whether Artificial General Intelligence (AGI) is already here. While we are still a long way from true AGI, conversations with GPT-3-based chatbots can be very entertaining.
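
None of the systems named above is openly downloadable, but the flavour of such dialog agents can be reproduced with an open model. Below is a minimal chat-loop sketch using Microsoft’s DialoGPT through the Hugging Face transformers library; the model is our substitution for illustration, not Meena, Blender, or GPT-3.

```python
# Minimal chat loop with an open dialog model (Microsoft's DialoGPT),
# standing in for the closed systems discussed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = None
for _ in range(3):  # three conversational turns
    user = input(">> ")
    new_ids = tokenizer.encode(user + tokenizer.eos_token, return_tensors="pt")
    bot_input = new_ids if history is None else torch.cat([history, new_ids], dim=-1)
    history = model.generate(
        bot_input, max_length=1000, pad_token_id=tokenizer.eos_token_id
    )
    # Decode only the tokens the model appended this turn.
    print(tokenizer.decode(history[0, bot_input.shape[-1]:], skip_special_tokens=True))
```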

Read More

Microsoft is granted exclusive rights to use OpenAI’s GPT-3

Microsoft and OpenAI’s close relationship has taken another leap forward with the former gaining exclusive GPT-3 access. GPT-3 has been the talk of the AI town in recent months. OpenAI’s innovation can help to create convincing articles and the company once deemed it too dangerous to release in a world where misinformation and fake news…

Read More

Expert calls out ‘misleading’ claim that OpenAI’s GPT-3 wrote a full article

AI expert Jarno Duursma has called out a misleading article in The Guardian which claims to have been written entirely by OpenAI’s GPT-3. GPT-3 has made plenty of headlines in recent months, and the coverage is warranted: GPT-3 is certainly impressive. But many of the claims about its current capabilities are greatly exaggerated.

Read More

Researchers find cutting-edge language models fall short in basic reasoning

Even sophisticated language models such as OpenAI’s GPT-3 struggle with socially important topics like morality, history, and law. That’s the top-line finding from a new paper, coauthored by researchers at Columbia University, the University of Chicago, and the University of California, Berkeley, that proposes a 57-task test to measure models’ ability to reason.
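
The test poses multiple-choice questions across the 57 subjects and grades models by simple accuracy. The sketch below illustrates that scoring scheme; the model_answer function and the sample items are hypothetical stand-ins, not the paper’s code or data.

```python
# Minimal sketch of the benchmark's scoring scheme: multiple-choice
# questions spread across 57 subjects, graded by plain accuracy.
import random

def model_answer(question, choices):
    """Hypothetical stand-in; a real evaluation would query GPT-3 etc."""
    return random.choice("ABCD")  # a 25% random-guess baseline

questions = [  # illustrative placeholders, not the paper's data
    {"subject": "law", "q": "...", "choices": ["...", "...", "...", "..."], "answer": "B"},
    {"subject": "morality", "q": "...", "choices": ["...", "...", "...", "..."], "answer": "D"},
]

correct = sum(
    model_answer(item["q"], item["choices"]) == item["answer"]
    for item in questions
)
print(f"accuracy: {correct / len(questions):.0%}")
```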

Read More