
Roblox has announced the first iteration of Roblox Cube, its foundational AI model for generating 3D and 4D objects. The company first announced Cube at its annual developer conference last year, and the AI model is now being demoed at the Game Developers Conference (GDC).
In an official blog post, Roblox stated, "With Cube, we intend to make 3D creation more efficient. With 3D mesh generation, developers can quickly explore new creative directions and increase their productivity by deciding rapidly which to move forward with."
Cube 3D's first feature, the mesh generation API, is in beta. It allows developers to generate 3D models by typing simple commands like "generate a motorcycle" or "generate orange safety cone." The created mesh can then be further customized with textures and colors, making the process of designing objects faster and easier.
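For illustration only, the sketch below shows how a text-to-mesh request to such an API could look in practice. The endpoint URL, parameter names, and response format are assumptions made for the example; Roblox's actual beta interface is not documented in this article.

```python
# Hypothetical sketch of calling a text-to-mesh generation API.
# The endpoint, request schema, and response fields are illustrative
# assumptions, not Roblox's published interface.
import requests

API_URL = "https://example.com/v1/generate-mesh"  # placeholder endpoint


def generate_mesh(prompt: str, api_key: str) -> bytes:
    """Request a 3D mesh from a text prompt and return the raw mesh bytes."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "format": "obj"},  # assumed request schema
        timeout=60,
    )
    response.raise_for_status()
    return response.content


if __name__ == "__main__":
    mesh_bytes = generate_mesh("generate a motorcycle", api_key="YOUR_KEY")
    with open("motorcycle.obj", "wb") as f:
        f.write(mesh_bytes)
    print(f"Saved mesh ({len(mesh_bytes)} bytes)")
```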
Developers can also use Cube 3D to build plugins or train models on their own datasets. Unlike many 3D generation techniques, which reconstruct objects from 2D images, Cube 3D is trained on native 3D data from Roblox's platform, which helps the generated objects work seamlessly within the game environment.
In the future, Roblox aims for "creators to be able to generate entire scenes based on multimodal inputs." Cube 3D takes inspiration from language models, which generate text by predicting the next word in a sentence. It applies the same idea to 3D objects: a shape is broken into tokens, and the model predicts the next shape token until the full object is built.
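To make the analogy concrete, here is a minimal sketch of autoregressive next-token prediction over "shape tokens," structured the same way a small language model predicts words. The vocabulary size, tokenization scheme, and model dimensions are invented for illustration and are not details of Roblox's Cube 3D model.

```python
# Toy decoder that predicts the next "shape token," by analogy with
# language models. All sizes and the tokenization scheme are assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE = 1024   # assumed number of discrete shape tokens
D_MODEL = 256       # assumed embedding width
MAX_TOKENS = 64     # assumed maximum tokens per object


class TinyShapeDecoder(nn.Module):
    """A toy causal transformer over shape tokens."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.embed(tokens)
        x = self.blocks(x, mask=mask)
        return self.head(x)  # logits over the next shape token


@torch.no_grad()
def generate(model: nn.Module, start_token: int) -> list[int]:
    """Greedily predict shape tokens one at a time, like an LM decoding text."""
    tokens = torch.tensor([[start_token]])
    for _ in range(MAX_TOKENS - 1):
        logits = model(tokens)
        next_token = logits[0, -1].argmax().item()
        tokens = torch.cat([tokens, torch.tensor([[next_token]])], dim=1)
    return tokens[0].tolist()


if __name__ == "__main__":
    model = TinyShapeDecoder()
    print(generate(model, start_token=0))  # untrained, so output is arbitrary
```

The point of the sketch is the decoding loop: just as a language model emits one word at a time conditioned on the words so far, a shape model of this kind emits one shape token at a time until the object is complete.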
Roblox has also announced three additional models: text generation, text-to-speech, and speech-to-text. These features are slated to launch in the coming months. With these generative AI models, the company aims to make 3D creation accessible to everyone, from professionals to everyday users.