Published by Pye.ai
OpenAI’s GPT-3 is well known in the machine learning world for its human-like writing. It powers more than 300 applications, supporting search, conversation, text completion, and other advanced AI features through OpenAI’s API. Like all deep learning systems, GPT-3 looks for patterns in data.
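As an illustrative toy only (GPT-3 is a transformer with billions of learned weights, not a lookup table), here is the simplest possible version of "finding patterns in text": counting which word most often follows another in a corpus, then using those counts to autocomplete.

```python
from collections import Counter, defaultdict

# A toy "language model": count bigrams in a tiny corpus, then predict
# each next word as the most frequent follower of the current word.
# GPT-3 does something far richer, but the underlying idea --
# exploiting statistical regularities in text -- is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

Scale the corpus up to a large slice of the web and the model up to billions of parameters, and "predict the next word" starts producing the human-like text GPT-3 is known for.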
In 2020, the San Francisco-based AI lab OpenAI released a beta API that gives developers access to its latest models, including GPT-3, under a usage-based pricing model. OpenAI’s stated mission is to ensure that safe and beneficial artificial general intelligence (AGI) benefits all of humanity.
The lab was founded as a nonprofit in 2015 and created a for-profit offshoot in 2019 to attract funding and build a revenue model; that structure helped bring in a $1 billion investment from Microsoft.
To do this, the program was trained on a huge corpus of text, which it mined for statistical regularities. The first GPT, released in 2018, contained 117 million parameters (the weights of the connections between the network’s nodes). GPT-2, released in 2019, contained 1.5 billion parameters. The latest version, GPT-3, was trained with 175 billion parameters. Its training data includes all of the English Wikipedia, which is estimated to make up only about 0.6% of the total.
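To put those parameter counts in perspective, a quick back-of-the-envelope sketch of the raw weight storage each generation implies (assuming 2 bytes per parameter, as in 16-bit floats; real training and serving setups use more memory for optimizer state and activations):

```python
# Rough memory footprint of each GPT generation's weights, assuming
# 2 bytes (a 16-bit float) per parameter. This is a lower bound:
# real deployments add optimizer state, activations, and sharding overhead.
BYTES_PER_PARAM = 2

models = {
    "GPT-1 (2018)": 117_000_000,
    "GPT-2 (2019)": 1_500_000_000,
    "GPT-3 (2020)": 175_000_000_000,
}

for name, params in models.items():
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{name}: {params / 1e9:.3g}B params ≈ {gigabytes:.3g} GB of weights")
```

Even under this optimistic assumption, GPT-3’s weights alone come to roughly 350 GB, far beyond a single GPU of that era, which is why the model is only reachable through the API rather than distributed for download.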
Since the API’s release, GPT-3 has been generating an average of 4.5 billion words per day, and production traffic continues to grow.
As a reference, here is a shortlist of examples of how developers and applications are using GPT-3:
- A question-based search engine. Like Google, but for questions and answers: type a question and GPT-3 directs you to the relevant Wikipedia URL for the answer.
- A chatbot that lets you talk to historical figures. Because GPT-3 has been trained on so many digitized books, it has absorbed a fair amount of knowledge relevant to specific thinkers. Prompt it to talk like the philosopher Bertrand Russell, for example, and ask it to explain his views.
- Solving language and syntax puzzles. This might be less entertaining than the other use cases, but it is far more impressive to experts in the field. Show GPT-3 certain linguistic patterns and it will correctly complete new prompts that follow them.
- Generating computer code from text descriptions. Describe a design element or page layout in plain words and GPT-3 produces the relevant code.
- Answering medical queries. A medical doctor used GPT-3 to answer health care questions; the program not only gave the right answer but correctly explained the underlying medical issue.
- Style transfer for text. Give GPT-3 text written in one style and it can rewrite the text in another.
- Composing guitar tabs. Guitar tabs are shared on the web as ASCII text files, so you can bet they make up part of GPT-3’s training data.
- Writing creative fiction. This is a wide-ranging area within GPT-3’s skill set, and an incredibly impressive one. The best collection of the program’s literary samples comes from an independent researcher and writer who has collected a trove of GPT-3’s writing here. It ranges from one-sentence puns known as Tom Swifties, to poetry in the style of Allen Ginsberg, T.S. Eliot, and Emily Dickinson, to Navy SEAL copypasta.
- Autocompleting images, not just text. This work was done with GPT-2 rather than GPT-3, and by the OpenAI team itself, but it is still a striking example of the models’ flexibility. It shows that the same basic GPT architecture can be retrained on pixels instead of words, allowing it to perform the same autocomplete tasks with visual data that it performs with text. In OpenAI’s published examples, the model is fed half an image (the far-left column), completes it (the middle four columns), and is compared against the original picture (the far-right column).
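All of the text use cases above are driven through the same text-in, text-out API. As a hedged sketch of what a request looked like during the GPT-3 beta (the field names follow that era’s Completions endpoint; the engine name, default values, and exact schema here are assumptions to verify against OpenAI’s current documentation), here is the JSON body a client would build before POSTing it to the API. The code only constructs the payload; it never sends a network request.

```python
import json

def build_completion_request(prompt: str,
                             max_tokens: int = 64,
                             temperature: float = 0.7) -> str:
    """Build the JSON body for a GPT-3-era completion request.

    During the beta, requests went to
    POST https://api.openai.com/v1/engines/{engine}/completions
    with an "Authorization: Bearer <API key>" header. Field names follow
    the docs of that period; check OpenAI's current docs before relying
    on this schema.
    """
    body = {
        "prompt": prompt,             # the text GPT-3 should continue
        "max_tokens": max_tokens,     # upper bound on generated tokens
        "temperature": temperature,   # higher values = more varied output
    }
    return json.dumps(body)

request_json = build_completion_request("Explain Bertrand Russell's views on logic:")
print(request_json)
```

The same three knobs cover most of the use cases listed above; only the prompt changes, which is what made a single API able to power search engines, chatbots, and code generators alike.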
It is also important to know that GPT-3 itself isn’t always the most accurate tool. Here are a few ways GPT-3 has gone wrong. In one test, a GPT-3-powered medical chatbot encouraged a simulated patient to commit suicide. Another risk is spammers gaining access and using GPT-3 to spread misinformation at a vast scale.
Some worry that GPT-3 could prove harmful to society. On the surface, that doesn’t entirely align with the lab’s mission to benefit “all of humanity.” The good news is that GPT-3’s creators have decided to continue their research into the model’s biases and to keep improving it.