VJ UNION

vdmo

Point·E - OpenAI 3D Modeling

https://github.com/openai/point-e

This is the official code and model release for Point-E: A System for Generating 3D Point Clouds from Complex Prompts.

OpenAI recently announced the release of Point-E, its newest generative model, which produces 3D point clouds directly from text prompts. Whereas existing systems such as Google's DreamFusion typically need hours of GPU time to generate a single result, Point-E needs only one GPU and a minute or two. Ref

openai / point-e

Point cloud diffusion for 3D model synthesis

Point·E

[Animation of four 3D point clouds rotating]

This is the official code and model release for Point-E: A System for Generating 3D Point Clouds from Complex Prompts.

Usage

Install with pip install -e . from the root of the cloned repository.

To get started with examples, see the following notebooks:

  • image2pointcloud.ipynb - sample a point cloud, conditioned on some example synthetic view images.
  • text2pointcloud.ipynb - use our small, lower-quality pure text-to-3D model to produce 3D point clouds directly from text descriptions. This model's capabilities are limited, but it does understand some simple categories and colors (a condensed sketch of this flow follows the list).
  • pointcloud2mesh.ipynb - try our SDF regression model for producing meshes from point clouds.
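
Since the README only points to the notebooks, the sketch below condenses the flow they walk through: sample a point cloud from a text prompt, then fit the SDF model to turn it into a mesh. It is adapted from the text2pointcloud.ipynb and pointcloud2mesh.ipynb examples; the model names (base40M-textvec, upsample, sdf), the PointCloudSampler settings, and the marching_cubes_mesh helper reflect the release as of this post and may change, so treat it as an illustration rather than a guaranteed API.

    # Condensed sketch of the text2pointcloud + pointcloud2mesh notebooks.
    # Assumes pip install -e . has been run; a GPU makes this far faster.
    import torch
    from tqdm.auto import tqdm

    from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
    from point_e.diffusion.sampler import PointCloudSampler
    from point_e.models.configs import MODEL_CONFIGS, model_from_config
    from point_e.models.download import load_checkpoint
    from point_e.util.pc_to_mesh import marching_cubes_mesh

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Text-conditioned base model plus an upsampler that grows the cloud
    # from 1,024 to 4,096 points.
    base_name = 'base40M-textvec'
    base_model = model_from_config(MODEL_CONFIGS[base_name], device)
    base_model.eval()
    base_model.load_state_dict(load_checkpoint(base_name, device))
    base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

    upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
    upsampler_model.eval()
    upsampler_model.load_state_dict(load_checkpoint('upsample', device))
    upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])

    sampler = PointCloudSampler(
        device=device,
        models=[base_model, upsampler_model],
        diffusions=[base_diffusion, upsampler_diffusion],
        num_points=[1024, 4096 - 1024],
        aux_channels=['R', 'G', 'B'],
        guidance_scale=[3.0, 0.0],
        model_kwargs_key_filter=('texts', ''),  # only the base model sees the prompt
    )

    # Progressively sample a point cloud conditioned on a text prompt.
    prompt = 'a red motorcycle'
    samples = None
    for x in tqdm(sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(texts=[prompt]))):
        samples = x
    pc = sampler.output_to_point_clouds(samples)[0]

    # Regress an SDF from the point cloud and extract a mesh with marching cubes.
    sdf_model = model_from_config(MODEL_CONFIGS['sdf'], device)
    sdf_model.eval()
    sdf_model.load_state_dict(load_checkpoint('sdf', device))

    mesh = marching_cubes_mesh(pc=pc, model=sdf_model, batch_size=4096, grid_size=32, progress=True)
    with open('mesh.ply', 'wb') as f:
        mesh.write_ply(f)

The saved mesh.ply can then be opened in Blender or any point-cloud/mesh viewer; the notebooks themselves add plotting helpers that are omitted here for brevity.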

For our P-FID and P-IS evaluation scripts, see:

For our Blender rendering code, see blender_script.py.

Samples

You can download the seed images and point clouds corresponding to the paper banner images here.

You can download the seed images used for COCO CLIP R-Precision evaluations here.



