Posts Tagged ‘gpt3’

A.I. Is Mastering Language. Should We Trust What It Says? – The New York Times

April 24, 2022

I don’t trust it, but, at the same time, I am willing to let it help me write. Also, I suspect this will make grading term papers quite a challenge in the future.
https://www.nytimes.com/2022/04/15/magazine/ai-language.html?smid=tw-share

OpenAI API

March 26, 2022

https://openai.com/api/

Interesting stuff; try the “Playground”.

Compose AI: Automate Your Writing

March 26, 2022

https://www.compose.ai/

Teaching computers to read and speak chemistry

May 7, 2021

https://cen.acs.org/physical-chemistry/computational-chemistry/Teaching-computers-read-speak-chemistry/99/i5

Can a Machine Learn to Write for The New Yorker? | The New Yorker

May 7, 2021

https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker

100+ GPT-3 Examples, Demos, Apps, Showcase, and NLP Use-cases | GPT-3 Demo

April 17, 2021

https://gpt3demo.com/

https://betterwriter.ai/
https://www.swifterhq.com/tools/ai-email-generator
https://gpt3demo.com/apps/swifterhq-ai-email-generator

Robo-writers: the rise and risks of language-generating AI

April 17, 2021

https://www.nature.com/articles/d41586-021-00530-0

GPT3

QT:{{”
A neural network’s size — and therefore its power — is roughly measured by how many parameters it has. These numbers define the strengths of the connections between neurons. More neurons and more connections means more parameters; GPT-3 has 175 billion. The next-largest language model of its kind has 17 billion (see ‘Larger language models’). (In January, Google released a model with 1.6 trillion parameters, but it’s a ‘sparse’ model, meaning each parameter does less work. In terms of performance, this is equivalent to a ‘dense’ model that has between 10 billion and 100 billion parameters, says William Fedus, a researcher at the University of Montreal, Canada, and Google.)
“}}
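The quote’s point that “more neurons and more connections means more parameters” can be illustrated with a toy sketch. For a dense (fully connected) layer, the parameter count is inputs × outputs weights plus one bias per output; the layer sizes below are made up purely for illustration and have nothing to do with GPT-3’s actual architecture.

```python
# Toy illustration: counting parameters in dense layers.
# A dense layer with n_in inputs and n_out outputs has
# n_in * n_out weights plus n_out biases.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# A hypothetical tiny 3-layer network (sizes are arbitrary):
layers = [(1000, 4096), (4096, 4096), (4096, 1000)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # ~25 million parameters even for this small stack
```

Widening or deepening the stack multiplies this count quickly, which is why dense models at the 175-billion-parameter scale are so expensive; a “sparse” model reaches a large nominal count while activating only a fraction of those parameters per input.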