If you're looking to use the API, you can choose a code example in your preferred programming language. The Python example below shows a request to the Luma Text-to-Video endpoint.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/luma-txt-2-video"

# Request payload
data = {
    "prompt": "A teddy bear in sunglasses playing electric guitar and dancing",
    "loop": False,
    "aspect_ratio": "1:1"
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated video
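The response body contains the generated video. A minimal sketch for saving it to disk, assuming the endpoint returns the raw video bytes on success (the output filename is illustrative):

if response.status_code == 200:
    # Assumption: the body is the video file itself, not a JSON wrapper
    with open('output.mp4', 'wb') as f:
        f.write(response.content)
else:
    print(response.status_code, response.text)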
Request parameters:
prompt: Prompt to render a video.
loop: Set to true to loop the generated video.
aspect_ratio: Aspect ratio of the output video.
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
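For example, a minimal sketch reading that header from the response object above (header lookups in requests are case-insensitive):

remaining = response.headers.get('x-remaining-credits')
if remaining is not None:
    print(f"Remaining credits: {remaining}")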
Luma Video (Text to Video) is a state-of-the-art AI model that revolutionizes video creation by transforming text prompts into high-quality, realistic videos. This innovative tool is tailored for content creators, marketers, and educators seeking to enhance their storytelling capabilities through dynamic visual narratives.
Luma's Dream Machine competes with leading models like OpenAI's Sora and Kuaishou's Kling by providing high-resolution video creation (up to 1080p) and realistic motion simulation.
Realistic Visuals: Utilizing advanced neural networks, Dream Machine generates videos with high accuracy in motion dynamics, object interactions, and environmental consistency.
Rapid Processing: Capable of producing 120 frames in 120 seconds, facilitating quick iterations and extensive experimentation.
Text-to-Video: Simply input your text, and watch as Dream Machine transforms it into a captivating video.
Transformer-Based Architecture: Built on a transformer model trained on extensive video datasets, ensuring scalability and computational efficiency.
Universal Imagination Engine: Represents the initial phase of Luma’s broader initiative to develop a comprehensive imagination engine capable of diverse content generation tasks.
Cinematic Quality: Incorporates advanced camera motion algorithms to produce fluid, cinematic video sequences that align with the narrative and emotional tone of the input.
Versatile Camera Movements: The model allows users to experiment with a variety of fluid and naturalistic camera motions that align with the emotional tone and content of the scene. This flexibility enhances the storytelling aspect, enabling creators to convey their message more effectively. Just type the word "camera" followed by a direction in your text prompt (see the sketch after this list). Some camera motions you can try: Move Left/Right, Move Up/Down, Push In/Out, Pan Left/Right, Orbit Left/Right, Crane Up/Down.
Flexible Aspect Ratios: Luma Video supports various video aspect ratios, allowing creators to tailor their outputs for different platforms, from social media to widescreen presentations. This flexibility is crucial for meeting diverse content needs.
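A minimal sketch of a camera-motion prompt, reusing the request setup from the example above. The motion phrase follows the "camera" + direction pattern described in the list; the scene itself is illustrative:

data = {
    "prompt": "A sailboat gliding across a lake at sunset, camera orbit left",
    "loop": False,
    "aspect_ratio": "1:1"
}
response = requests.post(url, json=data, headers=headers)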
Marketing Campaigns: Create engaging promotional videos that resonate with your target audience.
Educational Content: Transform educational materials into visually appealing videos that enhance learning experiences.
Social Media: Generate eye-catching content for platforms like Instagram, TikTok, and YouTube to increase engagement.
SDXL ControlNet gives unprecedented control over text-to-image generation. SDXL ControlNet models introduce the concept of conditioning inputs, which provide additional information to guide the image generation process.
Story Diffusion turns your written narratives into stunning image sequences.
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.
Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face: no dataset, no training.