1 reply
puxatudo
Joined: 04/07/2014
Groups: Go Science

Have you guys heard about GPT-3?

If you haven't, here it is:

GPT-3 is an unsupervised "transformer language model" and the successor to GPT-2.
OpenAI (the artificial intelligence research laboratory behind GPT-3; yeah, Elon Musk was one of the founders) stated that the full version of GPT-3 contains 175 billion parameters.
Just to give you an idea of the scale of the beast.
Check this comparison:

OpenAI's "GPT-2" - 1.5 billion parameters
Google T5 - 11.0 billion parameters
Turing-NLG - 17.0 billion parameters
GPT-3 - 175.0 billion parameters
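To put those parameter counts in perspective, here is a minimal sketch that estimates how much memory the weights alone would take, assuming 2 bytes per parameter (fp16 precision); the byte-per-parameter figure is an assumption for illustration, not an official number from any of these labs.

```python
# Rough memory estimate for storing model weights.
# Assumption: 2 bytes per parameter (fp16) -- actual deployments vary.
models = {
    "GPT-2": 1.5e9,
    "Google T5": 11.0e9,
    "Turing-NLG": 17.0e9,
    "GPT-3": 175.0e9,
}

for name, params in models.items():
    gib = params * 2 / 2**30  # bytes -> GiB
    print(f"{name}: ~{gib:.0f} GiB of weights")
```

Under that assumption, GPT-3's weights alone would be on the order of 300+ GiB, far beyond a single consumer GPU.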

Check out some of the GPT-3 applications here: https://gpt3examples.com/#examples
There are tons of YouTube videos explaining GPT-3, its uses, implications, etc.

I was wondering: what if the AI were trained on Lua? Then we could just use English to write recipes.
I know, I'm kind of dreaming here... but hey, who knows?
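Just to make the dream a bit more concrete: one common way to steer a GPT-3-style model is a few-shot prompt, where you show it a couple of English-to-Lua pairs and then ask it to complete a new one. The sketch below only builds such a prompt string; the prompt format, the example request, and the Lua call shown (`structure.WiggleAll`) are assumptions for illustration, not a real Foldit integration.

```python
# Hypothetical sketch: assembling a few-shot prompt that asks a
# GPT-3-style completion model to translate English into a Foldit
# Lua recipe. Nothing here calls a real model or the real Foldit API.

EXAMPLES = [
    ("wiggle the whole protein for 5 iterations",
     "structure.WiggleAll(5)"),  # assumed example pair
]

def build_prompt(request):
    """Build a few-shot prompt string from an English request."""
    parts = ["Translate English into a Foldit Lua recipe."]
    for english, lua in EXAMPLES:
        parts.append(f"English: {english}\nLua: {lua}")
    # Leave the final "Lua:" open for the model to complete.
    parts.append(f"English: {request}\nLua:")
    return "\n\n".join(parts)

print(build_prompt("shake all sidechains once"))
```

The model's completion after the trailing `Lua:` would then be the candidate recipe code; you would still want a human (or a sandbox) to review it before running it.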

Joined: 06/20/2019
Groups: Go Science
I think there should be restrictions on that AI.

GPT-3 seems very impressive after looking at the website you shared.

I'm interested in your idea too, but if it were done, I think there would have to be restrictions thoroughly and securely integrated into the AI, restrictions that support the general, more universal morals of human culture. In particular, the AI must not be allowed to break or work around the Foldit Community Rules and the Foldit Terms of Service, directly or indirectly, whether through loopholes, unmentioned situations, or flaws in those rules themselves. The possibility that someone might use the AI to generate instructions for doing things that violate the Foldit Community Rules and Terms of Service should be taken into consideration and guarded against completely.

My main point is this: the first thing the AI, or any more powerful AI, would have to do once created is generate instructions, following the integrated restrictions described above, for making those restrictions better and more flawless (completely flawless, if possible).


Developed by: UW Center for Game Science, UW Institute for Protein Design, Northeastern University, Vanderbilt University Meiler Lab, UC Davis
Supported by: DARPA, NSF, NIH, HHMI, Amazon, Microsoft, Adobe, Boehringer Ingelheim, RosettaCommons