DALL·E is an extremely powerful AI tool based on GPT-3 that can create images out of practically any text you pass to it as an input. For example, it can construct a completely ludicrous image of a dog cutting down trees if you give it a detailed enough description of what you want the model to draw. If you're on social media, you may have seen image generations from DALL·E's unofficial and less powerful counterpart, DALL·E Mini.
Up until now, access to DALL·E has only been given to a handful of people, but it will soon become available to a million more. The announcement was made by OpenAI, the company behind DALL·E.
DALL·E has entered its beta testing phase. It's a closed beta of sorts, because access will only be given to one million people on the waitlist, as selected by OpenAI. Each individual will receive 50 free credits in their first month and then 15 complimentary credits in each subsequent month. They will also have the option to purchase additional credits in 115-credit increments for $15 each.
A single credit can be used to prompt DALL·E once, and in return it will generate four images based on the text you passed to it. So if you purchase 115 credits for $15, you will be able to send 115 prompts and receive 460 images in total. A single credit can also be spent on an edit or variation prompt, but that returns three images instead.
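If you want to sanity-check the math for a different credit balance, the arithmetic is just credits multiplied by images per prompt. Here is a minimal Python sketch using the per-prompt numbers quoted above; the constants and function name are purely illustrative and have nothing to do with any official OpenAI API:

```python
# Back-of-the-envelope calculation of how many images a credit balance yields.
# Constants mirror the figures in the article; this is an illustration only.

IMAGES_PER_GENERATION = 4   # a text prompt returns four images
IMAGES_PER_EDIT = 3         # an edit or variation prompt returns three images
CREDITS_PER_PURCHASE = 115  # $15 buys 115 credits

def total_images(credits: int, images_per_prompt: int = IMAGES_PER_GENERATION) -> int:
    """Each credit covers one prompt, so multiply credits by images per prompt."""
    return credits * images_per_prompt

print(total_images(CREDITS_PER_PURCHASE))                   # 460 images if every credit is a text prompt
print(total_images(CREDITS_PER_PURCHASE, IMAGES_PER_EDIT))  # 345 images if every credit is an edit/variation
```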
What"s even more interesting is that DALL·E users will immediately get commercial rights to any images created from their prompts. This includes reprinting, selling, and merchandising. Theroretically, a character that you conjure up using free credits could potentially be worth millions commercially.
OpenAI has noted that it will be taking the following measures to ensure that DALL·E is safe to use and that it is not misused for malicious purposes:
- Curbing misuse: To minimize the risk of DALL·E being misused to create deceptive content, we reject image uploads containing realistic faces and attempts to create the likeness of public figures, including celebrities and prominent political figures. We also used advanced techniques to prevent photorealistic generations of real individuals’ faces.
- Preventing harmful images: We’ve made our content filters more accurate so that they are more effective at blocking images that violate our content policy — which does not allow users to generate violent, adult, or political content, among other categories — while still allowing creative expression. We also limited DALL·E's exposure to these concepts by removing the most explicit content from its training data.
- Reducing bias: We implemented a new technique so that DALL·E generates images of people that more accurately reflect the diversity of the world’s population. This technique is applied at the system level when DALL·E is given a prompt about an individual that does not specify race or gender, like "CEO".
- Monitoring: We will continue to have automated and human monitoring systems to help guard against misuse.
Finally, artists in need of financial assistance can fill out this form to get more details about subsidized access.
One million more people on DALL·E's waitlist should be able to access the tool within the next few weeks.