GPT-2 vs GPT-3: The OpenAI Showdown


The Generative Pre-trained Transformer (GPT) is an innovation in the Natural Language Processing (NLP) space developed by OpenAI. These models are among the most advanced of their kind and can even be dangerous in the wrong hands. GPT is an unsupervised generative model, meaning it takes an input, such as a sentence, and tries to generate an appropriate response, and the data used for its training is not labelled.
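The "generate a continuation from unlabelled text" idea can be sketched with a deliberately tiny toy model. This is a stand-in, not GPT: the real models are Transformer networks with billions of parameters, but the basic loop is the same, condition on the text so far and sample the next token.

```python
import random
from collections import defaultdict

# Toy illustration of unsupervised generative modeling (NOT GPT itself):
# learn next-word statistics from raw, unlabelled text, then sample a
# continuation for a prompt one token at a time.

corpus = (
    "the model reads text and predicts the next word "
    "the model generates text one word at a time"
).split()

# Count bigram transitions; the only "supervision" is the text itself.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(prompt, n_words, seed=0):
    """Autoregressively extend a prompt by sampling next-word candidates."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = transitions.get(words[-1])
        if not candidates:  # dead end: no observed successor
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the model", 5))
```

GPT replaces the bigram lookup with a learned neural predictor over a huge corpus, which is what makes its continuations coherent over whole paragraphs rather than word pairs.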

Read More

OpenAI and Stanford researchers call for urgent action to address harms of large language models like GPT-3

The makers of large language models like Google and OpenAI may not have long to set standards that sufficiently address their impact on society. That’s according to a paper published last week by researchers from OpenAI and Stanford University. Open-source projects currently aiming to recreate GPT-3 include GPT-Neo, a project headed by EleutherAI.

Read More

Here’s where AI will advance in 2021

Artificial intelligence continues to advance at a rapid pace. Even in 2020, a year that did not lack compelling news, AI advances commanded mainstream attention on multiple occasions. OpenAI’s GPT-3, in particular, showed new and surprising ways we may soon be seeing AI penetrate daily life. Such rapid progress makes prediction about the future of AI somewhat difficult, but some areas do seem ripe for breakthroughs. Here are a few areas in AI that we feel particularly optimistic about in 2021.

Read More

Artificial intelligence researchers rank the top A.I. labs worldwide


There are some obvious contenders when it comes to commercial AI labs. U.S. Big Tech — Google, Facebook, Amazon, Apple and Microsoft — have all set up dedicated AI labs over the last decade. There’s also DeepMind, which is owned by Google parent company Alphabet, and OpenAI, which counts Elon Musk as a founding investor.

Read More

OpenAI’s text-to-image engine, DALL-E, is a powerful visual idea generator

Once upon a time in Silicon Valley, engineers at the various electronics firms would tinker at their benches and create new inventions. This tinkering was done, at least in part, to show the engineer at the next bench, so they could both appreciate the ingenuity and inspire others. Some of this work eventually made it into products, but much of it did not. By the late 1980s this inefficiency had been largely stamped out (by the bean counters first, and then by marketing staffs), and product development shifted to focus instead on perceived customer desires.

Read More

The power of AI

Artificial Intelligence is one of the trending topics in the field of Computer Science. It has been around since the early days of computing in 1943, but it has become a different beast now: it is a term used to make things sound cool and trendy, much as IoT was a few years ago, yet few people really understand the true power of AI.

Read More

OpenAI extends GPT-3 to combine NLP with images


A pair of neural networks unleashed by GPT-3 developer OpenAI uses text in the form of image captions to generate images. Developers said this predictive approach will help AI systems better understand language by providing context for deciphering the meaning of words, phrases and sentences.

Read More

OpenAI debuts DALL-E for generating images from text

OpenAI today debuted two multimodal AI systems that combine computer vision and NLP: DALL-E, a system that generates images from text, and CLIP, a network trained on 400 million pairs of images and text. One example image was generated by DALL-E from the text prompt “an illustration of a baby daikon radish in a tutu walking a dog.” DALL-E uses a 12-billion-parameter version of GPT-3 and, like GPT-3, is a Transformer language model. The name is meant to evoke the artist Salvador Dalí and the robot WALL-E.

Read More

Using reinforcement learning to build a self-learning grasping robot

In this post, I will explain my experience over the course of a year of working with Reinforcement Learning (RL) on autonomous robotic manipulation. It is always hard to start a big project with many moving parts, and this one was no exception. I want to pass on the knowledge I gathered through this process to help others overcome the initial inertia.

Read More

Microsoft is granted exclusive rights to use OpenAI’s GPT-3

Microsoft and OpenAI’s close relationship has taken another leap forward with the former gaining exclusive GPT-3 access. GPT-3 has been the talk of the AI town in recent months. OpenAI’s innovation can help to create convincing articles and the company once deemed it too dangerous to release in a world where misinformation and fake news…

Read More

Expert calls out ‘misleading’ claim that OpenAI’s GPT-3 wrote a full article

AI expert Jarno Duursma has called out a misleading article in The Guardian which claims to have been written entirely by OpenAI’s GPT-3. GPT-3 has made plenty of headlines in recent months. The coverage is warranted; GPT-3 is certainly impressive, but many claims about its current capabilities are greatly exaggerated.

Read More

Researchers find cutting-edge language models fall short in basic reasoning

Even sophisticated language models such as OpenAI’s GPT-3 struggle with socially important topics like morality, history, and law. That’s the top-line finding from a new paper coauthored by Columbia, University of Chicago, and University of California, Berkeley researchers that proposes a 57-task test to measure models’ ability to reason.

Read More