If you prefer to use the API, request examples are available in your programming language of choice.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/runway-gen3-alphaturbo"

# Request payload
data = {
    "promptText": "an astronaut with a mirrored helmet running in the field of sunflowers",
    "promptImage": "https://segmind-sd-models.s3.amazonaws.com/display_images/runway-gen3-input.png",
    "seed": 56698,
    "ratio": "16:9",
    "duration": 5
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response contains the generated video
promptText : Text prompt describing the desired animation effect.
promptImage : URL of the input image to be animated. An HTTPS URL pointing to an image. Images must be JPEG, PNG, or WebP and are limited to 16 MB.
seed : Seed for random generation. min: 1, max: 99999.
ratio : The aspect ratio of the output video. Allowed values:
duration : Duration of the animation in seconds. Allowed values: 5, 10
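Before sending a request, the documented ranges above can be checked client-side. The sketch below is illustrative and not part of any Segmind SDK; the duration values 5 and 10 are taken from the output settings described later on this page:

```python
def validate_payload(data):
    """Return a list of problems with a runway-gen3-alphaturbo payload."""
    errors = []
    seed = data.get("seed")
    if seed is not None and not 1 <= seed <= 99999:
        errors.append("seed must be between 1 and 99999")
    if data.get("duration") not in (5, 10):
        errors.append("duration should be 5 or 10 seconds")
    if not data.get("promptImage", "").startswith("https://"):
        errors.append("promptImage must be an HTTPS URL")
    return errors
```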
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
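For example, the header can be read off a `requests` response object. The helper below is a sketch; since HTTP header names are case-insensitive, it normalizes the key before comparing:

```python
def remaining_credits(headers):
    # Look up x-remaining-credits regardless of header capitalization.
    for key, value in dict(headers).items():
        if key.lower() == "x-remaining-credits":
            return int(value)
    return None

# Usage: remaining_credits(response.headers)
```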
Runway’s Gen-3 Alpha Turbo is a cutting-edge AI model designed to transform static images into dynamic videos with remarkable fidelity and motion. This model is part of the Gen-3 Alpha family, offering enhanced speed and cost-efficiency, making it an ideal choice for creators seeking high-quality video generation.
High Fidelity and Motion: Gen-3 Alpha Turbo excels in producing videos with superior image quality and smooth motion transitions. This is achieved through advanced algorithms that interpret and animate static images with precision.
Speed and Efficiency: The Turbo variant of Gen-3 Alpha is optimized for faster processing times. Users can generate videos more quickly and at a reduced cost, making it accessible for various project scales.
Versatile Input Options: The model supports both text and image prompts, allowing users to guide the video generation process with detailed descriptions or visual references. This flexibility ensures that the output aligns closely with the creator’s vision.
Extended Video Durations: Gen-3 Alpha Turbo supports video durations of up to 10 seconds per generation, with the ability to extend videos in increments of 8 seconds. This feature is particularly useful for creating longer, more complex video sequences.
Uploading an Image: Start by uploading an image that will serve as the initial frame of your video.
Entering a Text Prompt: Provide a descriptive text prompt that outlines the desired camera angles, subject movements, and scene transitions. For example, “A dramatic zoom in on the face of a movie villain as he raises an eyebrow, with lights shifting to cast an eerie red glow.”
Generating the Video: Click the “Generate” button to initiate the video creation process. The model will interpret the image and text prompt to produce a video that matches your specifications.
Configuring Settings: Adjust the output settings, such as resolution (1280x768 or 768x1280) and duration (5 s or 10 s), to suit your project requirements.
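The base64 helpers in the code sample above suggest that promptImage may also accept an inline image rather than a URL. The sketch below builds such a payload under that assumption; the data-URI format is not confirmed by this page, so verify it against the official parameter reference before relying on it:

```python
import base64

def build_payload(prompt, image_bytes, duration=5, ratio="16:9"):
    # Hypothetical: inline the uploaded image as a base64 data URI.
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "promptText": prompt,
        "promptImage": "data:image/png;base64," + encoded,
        "ratio": ratio,
        "duration": duration,
    }
```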
Detailed Prompts: The more detailed and specific your text prompt, the better the model can understand and execute your vision. Include elements like camera movements, lighting conditions, and scene descriptions.
High-Quality Images: Use high-resolution images to ensure that the initial frame of your video is clear and detailed. This will enhance the overall quality of the generated video.
Iterative Refinement: Experiment with different prompts and settings to refine the output. The iterative process allows you to achieve the best possible results by fine-tuning the input parameters.
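One simple way to iterate is to re-run the same prompt with several different seeds and compare the results. `seed_variants` below is an illustrative helper (not part of the API) that stays within the documented 1 to 99999 seed range:

```python
import random

def seed_variants(base_payload, n=3, rng_seed=0):
    """Yield n copies of the payload, each with a different random seed."""
    rng = random.Random(rng_seed)  # fixed so the sweep itself is reproducible
    for _ in range(n):
        variant = dict(base_payload)
        variant["seed"] = rng.randint(1, 99999)
        yield variant
```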
For more information, visit https://www.Runwayml.com