If you're looking to use the API, you can pick a ready-made request example in your desired programming language. The Python example below uses the requests library.
import requests
api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/omni-zero"
# Prepare data and files
data = {}
files = {}
data['seed'] = 42
data['prompt'] = "A person"
# For parameter "base_image", you can send a raw file or a URI:
# files['base_image'] = open('IMAGE_PATH', 'rb') # To send a file
# data['base_image'] = 'IMAGE_URI' # To send a URI
# For parameter "style_image", you can send a raw file or a URI:
# files['style_image'] = open('IMAGE_PATH', 'rb') # To send a file
# data['style_image'] = 'IMAGE_URI' # To send a URI
data['guidance_scale'] = 3
# For parameter "identity_image", you can send a raw file or a URI:
# files['identity_image'] = open('IMAGE_PATH', 'rb') # To send a file
# data['identity_image'] = 'IMAGE_URI' # To send a URI
data['negative_prompt'] = "blurry, out of focus"
data['number_of_steps'] = 10
data['number_of_images'] = 1
# For parameter "composition_image", you can send a raw file or a URI:
# files['composition_image'] = open('IMAGE_PATH', 'rb') # To send a file
# data['composition_image'] = 'IMAGE_URI' # To send a URI
# For parameter "depth_image", you can send a raw file or a URI:
# files['depth_image'] = open('IMAGE_PATH', 'rb') # To send a file
# data['depth_image'] = 'IMAGE_URI' # To send a URI
data['base_image_strength'] = 0.15
data['depth_image_strength'] = 0.5
data['style_image_strength'] = 1
data['identity_image_strength'] = 1
data['composition_image_strength'] = 1
headers = {'x-api-key': api_key}
response = requests.post(url, data=data, files=files, headers=headers)
print(response.content) # The response is the generated image
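The response body contains the raw image bytes, so you can write them straight to disk. A minimal sketch, continuing from the request above; the output filename and format are assumptions:
# Save the returned image bytes to disk (the output filename here is an assumption)
if response.status_code == 200:
    with open('output.jpg', 'wb') as f:
        f.write(response.content)
else:
    print(response.status_code, response.text)  # Inspect the error payload on failure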
seed : Random seed for the model
prompt : Prompt for the model
base_image : Base image for the model
style_image : Style image for the model
guidance_scale : Guidance scale for the model (min: 0, max: 14)
identity_image : Identity image for the model
negative_prompt : Negative prompt for the model
number_of_steps : Number of steps for the model (min: 1, max: 50)
number_of_images : Number of images to generate (min: 1, max: 4)
composition_image : Composition image for the model
depth_image : Depth image for the model
base_image_strength : Base image strength for the model (min: 0, max: 1)
depth_image_strength : Depth image strength for the model; if not supplied, the composition image will be used for depth (min: 0, max: 1)
style_image_strength : Style image strength for the model (min: 0, max: 1)
identity_image_strength : Identity image strength for the model (min: 0, max: 1)
composition_image_strength : Composition image strength for the model (min: 0, max: 1)
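As a convenience, you can clamp the numeric parameters to the ranges above before sending a request. The snippet below is a minimal sketch; the helper name is hypothetical and the hard-coded ranges simply mirror the limits listed in this reference:
# Ranges taken from the parameter reference above
RANGES = {
    'guidance_scale': (0, 14),
    'number_of_steps': (1, 50),
    'number_of_images': (1, 4),
    'base_image_strength': (0, 1),
    'depth_image_strength': (0, 1),
    'style_image_strength': (0, 1),
    'identity_image_strength': (0, 1),
    'composition_image_strength': (0, 1),
}

def clamp_params(data):
    # Keep each numeric parameter within its documented min/max
    for key, (low, high) in RANGES.items():
        if key in data:
            data[key] = min(max(data[key], low), high)
    return data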
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
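For example, after the request above you could read the header like this (a minimal sketch; the header name is as documented, while the warning threshold is an arbitrary example value):
# Read the remaining-credit count from the response headers
remaining = response.headers.get('x-remaining-credits')
if remaining is not None:
    print(f"Remaining credits: {remaining}")
    if float(remaining) < 10:  # The threshold here is an arbitrary example value
        print("Warning: credits are running low")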
Omni Zero is a powerful tool designed to create stylized portraits with zero-shot learning. Combining composition, style and identity images, Omni Zero can transform ordinary images into captivating artworks. Here are the key features:
Zero-Shot Composition: Omni Zero needs just one image to set the structure and scene of the output image, with no training required at all.
Zero-Shot Stylization: Omni Zero requires no training data or specific style examples. It adapts to any input image, making it ideal for artists, designers, and photographers.
Single Identity and Style: Omni Zero seamlessly integrates identity and style, allowing you to maintain the subject’s likeness while adding artistic flair to the image (see the request sketch after this list).
Multiple Identities and Styles (Work in Progress): Future updates will enable multi-identity and multi-style transformations, expanding creative possibilities.
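To illustrate the single identity and style workflow with the API described above, the following sketch sends an identity image and a style image by URI. The image URIs and strength values are placeholder assumptions, not real assets:
# Identity + style request; the image URIs below are placeholder assumptions
import requests

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/omni-zero"
data = {
    'prompt': "A person",
    'seed': 42,
    'identity_image': 'https://example.com/identity.jpg',  # assumed URI
    'style_image': 'https://example.com/style.jpg',         # assumed URI
    'identity_image_strength': 1,
    'style_image_strength': 1,
    'number_of_steps': 10,
    'number_of_images': 1,
}
response = requests.post(url, data=data, headers={'x-api-key': api_key})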
SDXL ControlNet gives unprecedented control over text-to-image generation. SDXL ControlNet models introduce the concept of conditioning inputs, which provide additional information to guide the image generation process.
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.
Turn a face into 3D, emoji, pixel art, video game, claymation or toy.
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.