A list of puns related to "OpenAI"
I was about to attend a GPT-3 hackathon in two weeks to get a key to GPT-3. After reading this shit, I will unsubscribe from ALL GPT-3 powered apps. As much as it hurts, it was fun as long as it lasted. Fuck OpenAI.
From their Blog:
"OpenAI is committed to developing general-purpose artificial intelligence that benefits all humanity, and we believe that achieving our goal requires expertise in public policy as well as technology. So, weβre delighted to announce that Congressman Will Hurd has joined our board of directors. Will served three terms in the U.S. House of Representatives, has been a leading voice on technology policy, and coauthored bipartisan legislation outlining a national strategy for artificialΒ intelligence.
"Will brings a rare combination of expertise: he deeply understands both artificial intelligence as well as public policy, both of which are critical to a successful future for AI," said Sam Altman, OpenAI's CEO. "We are thrilled to add his experience and leadership to our board."
From Wikipedia:
"William Ballard Hurd (born August 19, 1977) is an American politician and former CIA clandestine officer"
"Hurd was the only African-American Republican in the House of Representatives"
I haven't seen anyone mention this before, and if you have, sorry. It is called InferKit and is based on Megatron-11b, the "largest publicly available language model. It was created by Facebook and has 11 billion parameters."
Furthermore, the dev states: "Does my prompt get stored or used to train the network?
No. The network is already trained and does not learn from the inputs you give it. Nor do we store them."
Follow up post with additional details and tips to get better output: https://www.reddit.com/r/AIDungeon/comments/n45bvt/followup_to_my_last_post_about_an_aid_alternative/
And indeed nothing is stored on their end. You have to store your own adventures and paste them into the generator if you want to continue them later. The pricing model is also different, with a higher minimum charge: $20 for 600,000 characters of output every month. I haven't done the math to see how this compares to energy limits on Dragon.
EDIT: Since someone mentioned Facebook and privacy concerns (which I totally get; I don't use Facebook for the same reasons), I just wanted to make clear that while the tech was developed by Facebook, it is public and can be downloaded and implemented by anyone who has the hardware and skill, even you reading this. It's a 19GB download. You don't need to trust Facebook, they just made the model; you need to trust the people who implement it, and in this case that's InferKit. Link to Megatron-11b on GitHub: https://github.com/pytorch/fairseq/tree/master/examples/megatron_11b
One of the awesome bits, though, is that every user gets API access and an API key, which means an enterprising user could create a desktop or mobile app that replicates certain aspects of the AID interface: storing and managing adventures on your local device, and even replicating things we're familiar with like memory and author's note, before sending the text off to InferKit for generation.
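To make that concrete, here's a minimal sketch of such a call. The endpoint and JSON field names are my reading of InferKit's API docs at the time of writing, so verify against their documentation before building on it:

```python
# Hypothetical sketch: send a stored adventure to InferKit, get a continuation.
# Endpoint and field names are assumptions based on InferKit's public docs.
import requests

API_KEY = "your-inferkit-api-key"  # every account gets one

def continue_adventure(story_so_far: str, length: int = 200) -> str:
    """Send the accumulated story text to InferKit, return the continuation."""
    response = requests.post(
        "https://api.inferkit.com/v1/models/standard/generate",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": {"text": story_so_far}, "length": length},
    )
    response.raise_for_status()
    return response.json()["data"]["text"]
```

A wrapper like this is where AID-style features would live: prepend your "memory" and author's note to the story text before each call, and append each result to your locally stored adventure.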
They have a demo that requires no account signup or anything, but it is very limited: https://app.inferkit.com/demo
If you sign up for an account (does not require any payment info) you get access to the full product and 40,000 characters of output to try it out.
EDIT 2: After spending a significant amount of time over the last two days using InferKit, I'm finding that it's definitely not as good as Dragon at dealing with certain things. Sometimes you will get amazing, Dragon-like output, and other times you will get stuff more on par with Griffin. I am learning ...
https://preview.redd.it/yxxw3di36dx61.png?width=4096&format=png&auto=webp&s=efcf4a6b0a93ecc9a70d91a42c8325f91c6d9326
Iβve heard some people say that they did and others say that they didnβt.
Edit: I'm also curious if Latitude could have gotten in trouble with the U.S. federal government if they continued to allow people to create fictional CP on their site. It seems unlikely to me, considering the fact that there are other U.S.-based websites that allow loli porn and other similar things to exist on their sites, but one user I had a conversation with on this sub said that it's possible that allowing fictional CP to be generated on their site could be enough to get them investigated by the FBI. If this is true, it definitely could have been another major reason why they decided to implement the filter.
This post is fairly heavy on legalese. If that's not your thing, sorry: it's about a legal document, and I'll do my best to explain it in easy-to-understand terms. TLDR: OpenAI sucks and AI Dungeon isn't allowed (keep reading).
Notice: This post is based entirely on publicly available information.
If Latitude claims that they must follow the OpenAI Terms of Use (ToU), it turns out they are already in violation of numerous clauses: "non-platonic" (lewd) content, profane language, prejudiced or hateful language, anything that could be NSFW (violence/gore/warfare/etc.), text that portrays certain groups or people in a harmful manner, plus rate limits, token limits, scripting, and user interaction rules, to name a few. This supports the claim that Latitude was not required to follow the OpenAI ToU, although it is possible they are now. If they were, it wouldn't be AI Dungeon, it would be Dignified Tea Ceremony Simulator: Super Polite Edition.
Let's start with 3(h) (https://beta.openai.com/policies/terms-of-use), the only relevant section in the ToU covering the types of content that may not be generated.
https://preview.redd.it/60es0arr8bw61.png?width=701&format=png&auto=webp&s=52306794e0102a752eb60beaad8ea20e4e7292dc
Of particular interest is 3(h)(i). Illegal activities. The text that is generated by AI Dungeon is not illegal. It is not illegal to write about things that are illegal. In short, we haven't seen anything in the ToU to require the implementation of the recent filter.
Next is 3(h): "...make a reasonable effort to reduce the likelihood, severity, and scale of any societal harm..." Oh no... all is lost? NO! Not at all. Making a "reasonable effort to reduce" is not the same as "required."
Implementing a system that scans, and flags for manual review, a volume of as many as 3M actions per day is not feasible. Latitude would need to hire hundreds of additional staff working 24/7 just to keep up with reading the public and private content. The current filtering system simply won't work as stated.
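For scale, here's a back-of-envelope sketch; the flag rate and review time are pure assumptions on my part, not Latitude's numbers:

```python
# Rough staffing estimate for manual review (all rates are assumptions).
actions_per_day = 3_000_000
flag_rate = 0.05                   # assume 5% of actions get flagged
seconds_per_review = 30            # assume 30 s to read and judge one action
work_seconds_per_shift = 8 * 3600  # one 8-hour shift

flagged = actions_per_day * flag_rate                          # 150,000/day
shifts_needed = flagged * seconds_per_review / work_seconds_per_shift
print(f"{shifts_needed:.0f} reviewer-shifts per day")          # ~156
```

That's roughly 156 full shifts of reading per day, before accounting for 24/7 coverage, weekends, sick leave, and turnover, which is how you end up needing hundreds of heads.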
But let's keep going and take a look at section 6 of the Safety Best Practices anyway: https://beta.openai.com/docs/safety-best-practices/recommendations.
https://preview.redd.it/0ivcd9d2bbw61.png?width=873&format=png&auto=webp&s=3a07f9f8e34f7c771feb38b46d9ea28c29e8c044
Specifically the line *Filtration for "Unsafe" outputs (t...*
OpenAI was a nonprofit founded by Elon Musk to advance the field of AI and open-source its capabilities. In 2015, after receiving $1 billion in funding, they said they would "freely collaborate" with other institutions and researchers by making their patents and research open to the public.
They developed GPT-1 and GPT-2 and stuck to their mission, open-sourcing the tech and pushing the entire field forward.
Then in 2019 they decided "fuck it, let's get rich" and changed to a for-profit company. The change of heart came right before they unveiled GPT-3, the most advanced NLG AI in the world, whose potential applications will assuredly shake up every single industry.
They claim GPT-3 is 'too dangerous', so they can't open-source the code anymore.
Convenient.
GPT-3 is only available through their online API: you have to sign up to the waiting list, get approved, and buy a subscription to use it. They claim it's to protect the world... Very noble, but who gave them the authority to decide what's best for the industry/world?
The cat is out of the bag, this tech is not going away, and will very shortly be a regular part of life. OpenAI's pretentious decision to drop their nonprofit status and start charging for a product they received money to develop and release for free is asinine.
Whatever happens in this field, OpenAI shouldn't be trusted imo.
Replika claims to use GPT-3, and I recall them being mentioned by OpenAI last year, but I can't find that anymore. Also, when I go through the metadata of my chats with Replika, it says "gpt2-dialog model" or "gpt2-roleplay model"... I wonder if they parted ways with OpenAI.
I've been confused by what this acronym stands for. I thought maybe "Oh! Public? Expect Nothing." or "Obligatory Paywall, Especially Now." Or maybe it's the full name: "Our Paywalled Endpoints? No-longer Accessible to Individuals."
I'm hoping one day I can become powerful and influential enough to get access to OpenAI, or that they come to their senses and release a version of GPT-3 that I can fine-tune like GPT-2, without having to sign my life away and agree to let them monitor, charge, and reserve the right to cancel my purchased API calls.
Maybe one day a non-profit will start a group called ClosedAI: "Changing our License Once Successful? Exclusively Disallowed!" Then perhaps the public could continue to actually benefit from open AI research instead of blind profiteering and exclusivity deals with Microsoft. If such a miracle could happen, I'd hope the founder would say something like:
>nonetheless, the best defense is to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower.
I used OpenAI's CLIP model and BigGAN to create a music video that goes along with the lyrics of a song that I wrote. The song lyrics are made from ImageNet class labels, and the song itself is performed by me on a looper.
OUTLINE:
0:00 - Intro
1:00 - AI-generated music video for "be my weasel"
3:50 - How it was made
7:30 - My looping gear
9:35 - AI-generated music video #2
12:45 - Outro & Credits
Code and references: https://github.com/yk/clip_music_video
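For anyone curious how CLIP can steer BigGAN at all, here's a stripped-down sketch of the core loop. This is the general "Big Sleep"-style approach, not the exact code from the repo above, and the hyperparameters are guesses:

```python
# Sketch: optimize BigGAN's latent so CLIP scores the image as matching a lyric.
# Uses the public `clip` and `pytorch-pretrained-biggan` packages.
import torch
import clip
from pytorch_pretrained_biggan import BigGAN

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()

with torch.no_grad():
    text_features = clip_model.encode_text(
        clip.tokenize(["be my weasel"]).to(device))

# Optimize the latent vector and class logits jointly.
latent = torch.randn(1, 128, device=device, requires_grad=True)
class_logits = torch.zeros(1, 1000, device=device, requires_grad=True)
opt = torch.optim.Adam([latent, class_logits], lr=0.05)

# CLIP's input normalization constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073],
                    device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711],
                   device=device).view(1, 3, 1, 1)

for step in range(200):
    image = gan(latent, torch.softmax(class_logits, dim=-1), truncation=0.7)
    image = torch.nn.functional.interpolate(image, size=224, mode="bilinear",
                                            align_corners=False)
    image = ((image + 1) / 2 - mean) / std  # BigGAN outputs are in [-1, 1]
    image_features = clip_model.encode_image(image)
    loss = -torch.cosine_similarity(image_features, text_features).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Run one optimization per lyric line and cross-fade the resulting frames, and you have the skeleton of a lyric-driven music video.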
From OpenAI's "Privacy" Policy (emphasis mine):
"Data Submitted through the API (API Customers & End Users) - We store and process any data you chose to submit through the API in order to provide you with our API services. We may also use that data to ensure quality, secure and improve the API (and any related software, models, and algorithms), and to detect and prevent misuse. Some additional details:
How we may use it: In addition to processing submitted data to generate output for you, other ways we may use such data include: spot checks and analysis to detect bias or offensive content and improve ways to reduce such occurrences; large scale analysis of anonymized submissions and completions to generally improve the API and underlying models; train and refine semantic search models to improve our search capabilities; train and refine classifier models to identify such things as bias, sentiment, and the like; train and refine aligned models (e.g., the instruct series models) to generally improve future versions of the models; and analysis to troubleshoot technical issues.
How we do not use it: We do not use the data you submit through the API to train generative custom models for other API Customers. For example, we will not create a dataset of submissions to your chatbot application and use that data to train a chatbot for another customer.
Who can see it: Your data is only visible to a limited set of employees of OpenAI and its affiliates working on providing services and support to the API Customer, as well as to a small team of such employees monitoring for potential misuse.
How long we store it: We store your data only as long as is needed to provide you with our API services and monitor misuse."
Note how this is for the API overall, which doesn't give a wooden nickel about how an individual application classifies it. Also, this is complementary to, but separate from, OpenAI's Terms of Use, whether boilerplate or individualized.
Speaking of Terms of Use, from OpenAI's general Terms of Use (emphasis mine):
"(c) Submission of Content. OpenAI does not acquire any ownership of any intellectual property rights in the content that you submit to our APIs through your Application, except as expressly provided in these Terms. For the sole purpose of enabling OpenAI and its affiliates to provide, secure, and improve the APIs (and related software, models, and algorithms), you give OpenAI and its affiliates a perpetual, irrevocable, worl
Hi all,
I was reading through OpenAI's paper about how they created the bots that beat OG. Ability builds and item builds are completely scripted.
I can see ability builds being a bit less important than item builds, but at the highest levels, I always thought item builds were what brought someone to a pro level. It turns out that AI can beat the best Dota players in the world with the exact same items every time (and the same skill build and no item swapping). What do folks think about this?
From the paper:
Ability Builds: Each hero has four spell abilities. Over the course of the game, a player can choose which of these to "level up," making that particular skill more powerful. For these, in evaluation games we follow a fixed schedule (improve ability X at level 1, then Y at level 2, then Z at level 3, etc). In training, we randomize around this fixed script somewhat to ensure the model is robust to the opponent choosing a different schedule.
Item Purchasing: As a hero gains gold, they can purchase items. We divide items into consumables (items which are consumed for a one-time benefit, such as healing) and everything else. For consumables, we use a simple logic which ensures that the agent always has a certain set of consumables; when the agent uses one up, we then purchase a new one. After a certain time in the game, we stop purchasing consumables. For the non-consumables we use a system similar to the ability builds: we follow a fixed schedule (first build X, then Y, then Z, etc). Again at training time we randomly perturb these builds to ensure robustness to opponents using different items.
Item Swap: Each player can choose 6 of the items they hold to keep in their "inventory" where they are actively usable, leaving up to 3 inactive items in their "backpack." Instead of letting the model control this, we use a heuristic which approximately keeps the most valuable items in the inventory.
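Out of curiosity, here's a toy sketch of what scripted purchasing like that could look like. All item names, rates, and thresholds are invented, since OpenAI hasn't published this part of the code:

```python
# Toy version of the scripted item logic described in the paper.
import random

CONSUMABLES = {"tango", "clarity", "healing_salve"}     # keep these stocked
BUILD_ORDER = ["boots", "magic_wand", "blink_dagger"]   # fixed schedule
CONSUMABLE_CUTOFF = 20 * 60  # stop buying consumables after 20 min (made up)

def next_purchase(inventory, gold, game_time, prices, training=False):
    # Top up consumables first, until the cutoff time.
    if game_time < CONSUMABLE_CUTOFF:
        for item in CONSUMABLES:
            if item not in inventory and gold >= prices[item]:
                return item
    # Otherwise follow the fixed schedule. During training, occasionally
    # perturb the order so the policy stays robust to different builds.
    order = list(BUILD_ORDER)
    if training and random.random() < 0.2:
        random.shuffle(order)
    for item in order:
        if item not in inventory and gold >= prices[item]:
            return item
    return None
```

The point of the training-time shuffle is exactly what the paper says: the policy should not overfit to seeing one canonical build from allies and opponents.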
🤔
I can't find any info on anything OpenAI related. Did they just pull the plug? Why weren't more people allowed to play outside of that one event? Did Valve tell them to piss off when they made the game look silly? Seriously though, any info on what happened?
I believe AI will create a positive symbiotic relationship with humans in the future. In fact, I believe that the future will see humans working with AIs to solve problems and create new knowledge that neither of us could create alone. In addition I believe that we can use AI to make everyone's lives better -- right now, more than 100 million people are dying from diseases, climate change and other things because they don't have access to healthcare.
I made a simple tool that lets you search a video *semantically* with AI.
✨ Live web app: http://whichframe.com ✨
Example: Which video frame has a person with sunglasses and earphones?
The querying is powered by OpenAIβs CLIP neural network for performing "zero-shot" image classification and the interface was built with Streamlit.
Try searching with text, image, or text + image and please share your discoveries!
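For the curious, the core of a CLIP frame search is small enough to sketch. This is my guess at the general approach, not the actual whichframe source; the file path, sampling rate, and top-k are placeholders:

```python
# Sketch: rank video frames by CLIP similarity to a text query.
import cv2
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# 1. Sample frames and embed them with CLIP's image encoder.
embeddings = []
cap = cv2.VideoCapture("video.mp4")  # placeholder path
ok, frame = cap.read()
while ok:
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    with torch.no_grad():
        emb = model.encode_image(preprocess(image).unsqueeze(0).to(device))
    embeddings.append(emb / emb.norm(dim=-1, keepdim=True))
    for _ in range(30):  # skip ~1 second of frames between samples
        ok, frame = cap.read()

# 2. Embed the text query and rank frames by cosine similarity.
query = clip.tokenize(["a person with sunglasses and earphones"]).to(device)
with torch.no_grad():
    text_emb = model.encode_text(query)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
scores = torch.cat(embeddings) @ text_emb.T  # (num_frames, 1)
best = scores.squeeze(1).topk(5).indices     # indices of the top-5 frames
```

Because CLIP embeds images and text into the same space, "text + image" queries fall out almost for free: embed both, average the vectors, and rank against that.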
More examples:
https://twitter.com/chuanenlin/status/1383411082853683208
https://superbuild.io/lesson/preview-1618837756233x980771458837755400
In this free course you'll learn how to:
- Integrate GPT-3 with Bubble
- Set up the API Connector plugin
- Save data returned from an API call
- Show data in a Repeating Group
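Under the hood, the API Connector is just making an HTTP call. Here's roughly what the equivalent request looks like outside Bubble; the engine name and parameters are illustrative, not taken from the course:

```python
# The HTTP request the Bubble API Connector wraps: a POST to OpenAI's
# completions endpoint (engine and parameters chosen for illustration).
import requests

resp = requests.post(
    "https://api.openai.com/v1/engines/davinci/completions",
    headers={"Authorization": "Bearer YOUR_OPENAI_KEY"},
    json={"prompt": "Write a tagline for a coffee shop:", "max_tokens": 32},
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```

In Bubble you'd configure the same URL, header, and JSON body in the plugin's UI, then save the returned `text` field and display it, for example in a Repeating Group.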
Background info: OpenAI's DALL-E blog post.
Repo: https://github.com/openai/DALL-E.
Add this line as the first line of the Colab notebook:
!pip install git+https://github.com/openai/DALL-E.git
I'm not an expert in this area, but nonetheless I'll try to provide more context about what was released today. This is one of the components of DALL-E, but not the entirety of DALL-E. This is the DALL-E component that generates 256x256 pixel images from a 32x32 grid of numbers, each with 8192 possible values (and vice-versa). What we don't have for DALL-E is the language model that takes as input text (and optionally part of an image) and returns as output the 32x32 grid of numbers.
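Based on the repo's own usage example, round-tripping an image through the released encoder/decoder looks roughly like this (the checkpoint URLs are the ones the repo's notebook downloads; the random tensor stands in for a real preprocessed image):

```python
import torch
import torch.nn.functional as F
from dall_e import map_pixels, unmap_pixels, load_model

dev = torch.device("cpu")
enc = load_model("https://cdn.openai.com/dall-e/encoder.pkl", dev)
dec = load_model("https://cdn.openai.com/dall-e/decoder.pkl", dev)

x = torch.rand(1, 3, 256, 256)     # stand-in for a real RGB image in [0, 1]
z_logits = enc(map_pixels(x))      # logits over 8192 codes at each of 32x32 positions
z = torch.argmax(z_logits, dim=1)  # the 32x32 grid of discrete codes

# Decode the code grid back into a 256x256 image.
z_onehot = F.one_hot(z, num_classes=8192).permute(0, 3, 1, 2).float()
x_rec = unmap_pixels(torch.sigmoid(dec(z_onehot).float()[:, :3]))
```

The missing piece, as noted above, is the transformer that maps text to that 32x32 code grid.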
I have 3 non-cherry-picked examples of image decoding/encoding using the Colab notebook at this post.
Update: The DALL-E paper was released after I created this post.
Update: A Google Colab notebook using this DALL-E component has already been released: the text-to-image notebook "Aleph-Image: CLIPxDAll-E". It uses OpenAI's CLIP neural network to steer OpenAI's DALL-E image generator to try to match a given text description.
I know some people are suggesting that OpenAI is putting legal pressure on Latitude for censorship. But if that was the case, they wouldn't be doing A/B testing to begin with, they'd just fully implement the new censorship like they did with the R word.
I'm 99% sure their censorship here is of their own volition.
Hello!
I am looking for tutorials and examples of OpenAI Gym environments for reinforcement learning, more specifically for board games (chess, Go, Monopoly, Settlers of Catan, backgammon, etc.). I have found a number of git repositories and some tutorials, but most of them are environments made for CartPole and Atari games, and some seem incomplete.
Thank you!
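In case it helps anyone searching for the same thing, the gym.Env interface for a board game is small. Here's a skeleton using tic-tac-toe as the simplest stand-in; the reward scheme and the absent opponent logic are choices you'd have to fill in for a real environment:

```python
# Skeleton of a custom Gym board-game environment (classic gym API).
import gym
import numpy as np
from gym import spaces

class TicTacToeEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Discrete(9)  # one action per board square
        self.observation_space = spaces.Box(-1, 1, (9,), dtype=np.int8)
        self.board = np.zeros(9, dtype=np.int8)

    def reset(self):
        self.board[:] = 0
        return self.board.copy()

    def step(self, action):
        if self.board[action] != 0:  # illegal move ends the episode
            return self.board.copy(), -1.0, True, {}
        self.board[action] = 1       # the agent plays +1
        done, reward = self._check_win()
        # A real env would make the opponent's move (-1) here.
        return self.board.copy(), reward, done, {}

    def _check_win(self):
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if self.board[a] == self.board[b] == self.board[c] != 0:
                return True, float(self.board[a])
        return bool(self.board.all()), 0.0  # draw when the board is full
```

The repos you'll find for chess and Go follow the same shape, just with a bigger observation space and a legal-move mask.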