API
To use the API, call it from the programming language of your choice; a Python example is shown below.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/video-effects"

# Request payload
data = {
    "subject": "person",
    "negative_prompt": "blurry, bad quality, camera shake, distortion, poor composition, low resolution, artifact, watermark",
    "effect": "squish_it",
    "image": image_url_to_base64("https://segmind-resources.s3.amazonaws.com/input/d5495362-90af-401f-815c-a032c71f7787-wan-effect.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "seed": 30887452,
    "video_length": 4,
    "resolution": 560,
    "steps": 30,
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated video
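With "base64" set to False, the response body appears to contain the generated video as raw bytes. The short sketch below reuses the response object from the request above and simply writes that body to disk; the MP4 extension and the output filename are assumptions, so check the Content-Type header of your own responses.

# Minimal sketch (assumption): the body is raw video bytes, likely MP4
if response.status_code == 200:
    with open("wan_effect_output.mp4", "wb") as f:  # hypothetical output filename
        f.write(response.content)
    print("Saved video to wan_effect_output.mp4")
else:
    # On failure, print the status code and body for debugging
    print("Request failed:", response.status_code, response.text)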
Attributes
subject - Describe your subject in 1-2 words.
negative_prompt - Negative prompt for video generation.
effect - Effect to be applied to the video. Allowed values:
image - Reference image for video generation.
seed - Seed number for video generation.
video_length - Length of the generated video in seconds (min: 1, max: 5).
resolution - Resolution of the generated video (longest side of the video). Allowed values:
steps - Number of steps for video generation (min: 10, max: 70).
base64 - Output as base64.
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
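For example, reusing the response object from the request above, a quick check of the remaining credits might look like this (the header name is taken from the note above):

# Read the remaining credits from the headers of the previous response
remaining = response.headers.get("x-remaining-credits")
if remaining is not None:
    print("Remaining credits:", remaining)
else:
    print("x-remaining-credits header not present on this response")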
Wan Video Effects
Unleash your creativity with the Wan Video Effects model, designed for anyone looking to enhance their videos with unique transformations. Easily apply a range of effects to personalize your content and captivate your audience. Whether for personal projects or professional use, Wan video effects offers a simple way to create engaging visuals.
Key Features of Wan Video Effects
- Diverse Effect Selection: Choose from a wide variety of creative video effects like "muscle_me," "squish_it," "rotate_it," and many more, allowing for unique and engaging visual transformations (see the payload sketch after this list).
- Customizable Subject: Define the subject of your video in 1-2 words, ensuring the applied video effects are focused and relevant to your content.
- Negative Prompt Control: Refine your generated video by specifying negative prompts, helping to avoid undesirable elements like blurriness or low quality for a polished final product.
- Adjustable Video Length: Control the duration of your generated video in seconds, providing flexibility for various content needs and platforms.
- Resolution Options: Select the desired resolution for your video, ensuring the output meets your specific quality requirements.
- Step Control: Determine the number of steps for video generation, influencing the level of detail and refinement in your final video effect.
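These features map directly onto the request payload shown in the API example above. The sketch below, which assumes the same endpoint and defaults as that example, shows one way to vary them per request; build_payload is a purely illustrative helper, not part of the API.

# Hypothetical helper (not part of the API): assembles a payload for a chosen effect
def build_payload(image_b64, effect, subject="person", video_length=4, resolution=560, steps=30):
    return {
        "subject": subject,             # 1-2 word description of the subject
        "effect": effect,               # e.g. "squish_it", "rotate_it", "muscle_me"
        "image": image_b64,             # base64-encoded reference image
        "negative_prompt": "blurry, bad quality, low resolution, watermark",
        "video_length": video_length,   # seconds (1-5)
        "resolution": resolution,       # longest side of the output video
        "steps": steps,                 # generation steps (10-70)
        "base64": False,                # return raw bytes rather than base64
    }

# Example: render the same image with two different effects
# payloads = [build_payload(img_b64, "squish_it"), build_payload(img_b64, "rotate_it")]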
Use Cases
- Social Media Content Creation: Enhance social media videos with eye-catching video effects like "baby_me" or "jungle_me" to increase engagement and virality.
- Marketing and Advertising: Create unique promotional videos using effects like "inflate_it" or "crush_it" to grab attention and highlight product features.
- Personalized Video Messages: Add fun and creative video effects like "pirate_me" or "princess_me" to personalize video messages for friends and family.
- Artistic Video Exploration: Experiment with various video effects such as "mona_me" or "museum_me" to create unique and artistic visual content.
The Wan Video Effects model provides a simple yet powerful way to transform your videos with a variety of creative options. Customize your creations with adjustable parameters to generate engaging and unique visuals for any purpose.
Other Popular Models
sdxl-controlnet
SDXL ControlNet gives unprecedented control over text-to-image generation. SDXL ControlNet models introduce the concept of conditioning inputs, which provide additional information to guide the image generation process.

sadtalker
Audio-based Lip Synchronization for Talking Head Video

faceswap-v2
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.

sdxl1.0-txt2img
The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.
