API
If you're looking for an API, you can call the endpoint below from your preferred programming language; the example here uses Python.
import requests
api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/hunyuan3d-2mv"
# Prepare data and files
data = {}
files = {}
data['seed'] = 1234
data['steps'] = 30
data['file_type'] = "glb"
# For parameter "back_image", you can send a raw file or a URI:
# files['back_image'] = open('IMAGE_PATH', 'rb') # To send a file
data['back_image'] = 'https://segmind-resources.s3.amazonaws.com/input/c2aaa604-ff80-4daa-8e17-19523116277b-df54088d-8eea-441d-8f74-8e38b17fd120.png' # To send a URI
# For parameter "left_image", you can send a raw file or a URI:
# files['left_image'] = open('IMAGE_PATH', 'rb') # To send a file
data['left_image'] = 'https://segmind-resources.s3.amazonaws.com/input/67a34835-feea-49b9-8aac-c6f7866c5812-c43ad400-6084-482b-9134-d0969e4b332c.png' # To send a URI
data['num_chunks'] = 200000
# For parameter "front_image", you can send a raw file or a URI:
# files['front_image'] = open('IMAGE_PATH', 'rb') # To send a file
data['front_image'] = 'https://segmind-resources.s3.amazonaws.com/input/54d29fd1-a48f-4935-bc7f-5446420e1436-a08f2dbd-00e5-4bcb-a6f0-ea11f31aa82f-0471efb6-5439-40e8-9031-5374b7f50691.png' # To send a URI
# For parameter "right_image", you can send a raw file or a URI:
# files['right_image'] = open('IMAGE_PATH', 'rb') # To send a file
data['right_image'] = 'https://segmind-resources.s3.amazonaws.com/input/10f44971-5c5c-44c2-8b6c-6baa7a98e76c-380c79fa-3dd9-4b0c-ba05-485e8894b019.png' # To send a URI
data['guidance_scale'] = 5
data['randomize_seed'] = True
data['target_face_num'] = 10000
data['octree_resolution'] = 256
data['remove_background'] = True
headers = {'x-api-key': api_key}
# If no files, send as JSON
if files:
    response = requests.post(url, data=data, files=files, headers=headers)
else:
    response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response body is the generated 3D model file, not an image
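Since a successful response body is the generated model file itself, you will usually want to write the raw bytes to disk rather than print them. Below is a minimal follow-up sketch reusing the response and data objects from the snippet above; the output filename and the error handling are illustrative, not part of the API.
# Save the generated 3D model returned by the API call above.
if response.status_code == 200:
    out_path = f"hunyuan3d_output.{data['file_type']}"  # e.g. hunyuan3d_output.glb
    with open(out_path, "wb") as f:
        f.write(response.content)
    print(f"Saved 3D model to {out_path}")
else:
    # Non-200 responses carry an error message instead of model bytes.
    print(response.status_code, response.text)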
Attributes
seed : Seed value
steps : Number of inference steps (min: 1, max: 100)
file_type : Output file type. An enumeration; the example above uses "glb".
back_image : Back view image
left_image : Left view image
num_chunks : Number of chunks (min: 1000, max: 5000000)
front_image : Front view image
right_image : Right view image
guidance_scale : Guidance scale
randomize_seed : Randomize seed
target_face_num : Target number of faces for mesh simplification (min: 100, max: 1000000)
octree_resolution : Octree resolution (min: 16, max: 512)
remove_background : Remove image background
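The ranges above can also be checked client-side before a request is sent. The sketch below is a hypothetical helper (not part of any Segmind SDK) that enforces the documented min/max bounds.
# Hypothetical client-side check of the documented parameter ranges.
RANGES = {
    'steps': (1, 100),
    'num_chunks': (1000, 5000000),
    'target_face_num': (100, 1000000),
    'octree_resolution': (16, 512),
}

def validate_params(params):
    """Raise ValueError if any ranged parameter falls outside its documented bounds."""
    for name, (low, high) in RANGES.items():
        if name in params and not (low <= params[name] <= high):
            raise ValueError(f"{name}={params[name]} is outside the allowed range [{low}, {high}]")

validate_params({'steps': 30, 'octree_resolution': 256})  # passes silently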
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
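For example, with the response object from the Python snippet above (the header name comes from this page; the rest is standard requests usage):
# Inspect the remaining-credits header on the API response.
remaining = response.headers.get('x-remaining-credits')
if remaining is not None:
    print(f"Remaining Segmind credits: {remaining}")
else:
    print("x-remaining-credits header not present in this response")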
Hunyuan3D-2mv 3D Model Generator from Images
Hunyuan3D-2mv, developed by Tencent and released on March 18, 2025, is a specialized iteration of the Hunyuan3D-2 framework, designed to enhance 3D asset generation by leveraging multiview controlled shape generation. Built on a scalable flow-based diffusion transformer architecture, this model excels at producing detailed, high-resolution 3D meshes from multiple input images—typically front, back, left, and right views. It’s part of the broader Hunyuan3D 2.0 ecosystem, which aims to democratize 3D content creation through open-source tools, making it accessible to both professional developers and hobbyists.
Strengths
The core strength of Hunyuan3D-2mv lies in its ability to interpret and synthesize geometry from multiple perspectives, ensuring better consistency and accuracy in the resulting 3D models compared to single-image-based methods. This multiview approach allows the model to capture intricate details and align the generated mesh closely with the provided conditional images. The process is remarkably fast, making it suitable for rapid prototyping and iterative workflows. Additionally, its open-source nature—available on platforms like Hugging Face and GitHub—means developers can customize and integrate it into their own pipelines, whether through code or tools like Blender add-ons.
For developers
For developers, Hunyuan3D-2mv offers several practical use cases. In game development, it can accelerate the creation of textured 3D assets, reducing the time from concept to implementation. Film and animation studios can use it to generate props or characters from concept art, streamlining pre-production. It’s also valuable in virtual reality (VR) and augmented reality (AR) applications, where consistent 3D models from multiple angles are crucial for immersive experiences. The model supports output formats like GLB, OBJ, PLY, and STL, making it versatile for 3D printing or further editing in software like Blender or Unity.
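As one concrete route into an editing tool, a GLB file downloaded from the endpoint can be imported into Blender with its bundled glTF importer. This is a minimal sketch, assuming Blender 2.8+ with the glTF add-on enabled and run from Blender's scripting workspace; the file path is a placeholder.
# Run inside Blender's Python environment, not a standalone interpreter.
import bpy

# Import the GLB produced by the API call (placeholder path).
bpy.ops.import_scene.gltf(filepath="/path/to/hunyuan3d_output.glb")

# The importer selects the new objects; list them for a quick sanity check.
for obj in bpy.context.selected_objects:
    print(obj.name, obj.type)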
Unique Architecture
The model’s two-stage pipeline—first generating a bare mesh, then optionally applying high-resolution textures via Hunyuan3D-Paint—provides flexibility, allowing developers to focus on shape generation or full asset creation as needed. This decoupling of shape and texture generation enhances control and adaptability.
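For local, open-source use the two stages are exposed as separate pipelines in the Hunyuan3D-2 GitHub repository. The sketch below follows the usage shown in that repository's README, but the package layout, class names, and multiview input format are assumptions here and should be verified against the repo before use.
# Assumed usage based on the Hunyuan3D-2 README; verify names against the repository.
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline  # stage 1: bare mesh
from hy3dgen.texgen import Hunyuan3DPaintPipeline              # stage 2: optional texturing

# Stage 1: generate an untextured mesh from multiview images.
shape_pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2mv')
mesh = shape_pipeline(image={'front': 'front.png', 'left': 'left.png',
                             'back': 'back.png', 'right': 'right.png'})[0]
mesh.export('shape_only.glb')  # stop here if only geometry is needed

# Stage 2 (optional): apply high-resolution textures with Hunyuan3D-Paint.
paint_pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
textured_mesh = paint_pipeline(mesh, image='front.png')
textured_mesh.export('textured.glb')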
Conclusion
In summary, Hunyuan3D-2mv is a powerful tool for developers seeking to automate and enhance 3D modeling workflows. Its multiview capability, speed, and open-source availability make it ideal for creating detailed, production-ready 3D assets across industries like gaming, film, and VR, while its extensibility invites innovation and experimentation.
Other Popular Models
sdxl-controlnet
SDXL ControlNet gives unprecedented control over text-to-image generation. SDXL ControlNet models introduce the concept of conditioning inputs, which provide additional information to guide the image generation process.

sdxl1.0-txt2img
The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.

sd2.1-faceswapper
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training required.
