In a recent paper, researchers test leading language models (GPT-2, GPT-3, and GPT-Neo) on programming questions drawn from coding interviews. The results aren't particularly groundbreaking but show potential.
DALL·E, a 12-billion-parameter version of GPT-3, is a transformer model that produces images from a given caption. The results are impressive, with the model capturing minute details.
OpenAI has exclusively licensed GPT-3, the largest transformer model to date, to Microsoft, which will use its NLG and NLP capabilities to build AI solutions for its customers.
Researchers at Google have developed BLEURT, an automatic metric that gauges the performance of natural language generation models and delivers SOTA performance on two academic benchmarks.
OpenAI has released the next iteration in its staged release of GPT-2, alongside a paper studying the release strategy and the model's social impacts.
OpenAI has developed a model that can produce realistic textual responses; however, the firm has opted not to release it, citing potentially malicious applications such as fake news and impersonation.