To call the API, you can use your preferred programming language; the example below uses Python.
import requests
api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/minimax-ai"
# Prepare data and files
data = {}
files = {}
data['prompt'] = "A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage."
data['prompt_optimizer'] = True
# For parameter "first_frame_image", you can send a raw file or a URI:
# files['first_frame_image'] = open('IMAGE_PATH', 'rb') # To send a file
# data['first_frame_image'] = 'IMAGE_URI' # To send a URI
headers = {'x-api-key': api_key}
response = requests.post(url, data=data, files=files, headers=headers)
print(response.content) # The response is the generated video
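Rather than printing the raw bytes, you will usually want to write the response body to disk. A minimal sketch, assuming the endpoint returns the video bytes directly on success (the helper name and output filename are illustrative, not part of the API):

```python
# Hypothetical helper: persist the generated video. Assumes a 200 response
# carries raw video bytes, while error responses carry a JSON message.
def save_output(response_content, status_code, path='output.mp4'):
    if status_code != 200:
        raise RuntimeError(f"Request failed: {response_content!r}")
    with open(path, 'wb') as f:
        f.write(response_content)
    return path

# e.g. after the request above:
# save_output(response.content, response.status_code)
```

Checking the status code first avoids silently saving a JSON error payload with an `.mp4` extension.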
- prompt: Text prompt for video generation.
- prompt_optimizer: Use prompt optimizer.
- first_frame_image: First frame image for video generation.
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
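A small helper can turn that header into a number so a low balance can be detected programmatically. A minimal sketch (the function name is ours, not part of the API):

```python
# Hypothetical helper: parse the x-remaining-credits response header
# into an int, returning None when the header is absent.
def remaining_credits(headers):
    value = headers.get('x-remaining-credits')
    return int(value) if value is not None else None

# e.g. remaining_credits(response.headers) after a Segmind API call
print(remaining_credits({'x-remaining-credits': '120'}))  # prints 120
```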
MiniMax Video-01 is an innovative AI-native video generation model that allows users to create high-definition videos from text descriptions or images. This model represents a significant advancement in the field of AI video generation, offering capabilities that cater to content creators, marketers, and developers alike.
High-Definition Output: Video-01 generates videos at a resolution of 1280 x 720 pixels and a frame rate of 25 frames per second. This ensures that the videos maintain cinematic quality, complete with advanced camera movements and stylistic elements.
Compression and Responsiveness: The model boasts high compression rates and excellent responsiveness to text inputs, allowing for quick generation of visually striking content.
Video Length: Currently, the model supports video generation of up to 6 seconds, with plans to extend this to 10 seconds in future updates.
Content Creation: Streamlines the production of high-quality video content; ideal for social media influencers and digital content creators looking to enhance their visual storytelling.
Marketing Campaigns: Create engaging promotional videos quickly.
Educational Materials: Generate informative videos based on textual content for educational purposes.
SDXL Img2Img is used for text-guided image-to-image translation. The model uses the weights from Stable Diffusion to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers. SDXL is the official upgrade to the v1.5 model and is released as open-source software.
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.
Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face: no dataset, no training required.