To call the API, use the programming language of your choice; the example below uses Python.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sd1.5-revanimated"

# Request payload
data = {
    "prompt": "advanced aircraft, gundam, dark black robot, spaceship, long, giant guns, futuristic design, scifi, in space, supernova, stars, planets, (8k, RAW photo, best quality, ultra high res, photorealistic, masterpiece, ultra-detailed, Unreal Engine),best quality, warrior,((cinematic look)), insane details, advanced weapon, fight, battle, epic, power, combat, shoot, shooting, missiles, bombs, explosions, rockets, jetpack, defence, attacking,wide angle",
    "negative_prompt": "boring, poorly drawn, bad artist, (worst quality:1.4), simple background, uninspired, (bad quality:1.4), monochrome, low background contrast, background noise, duplicate, crowded, (nipples:1.2), big breasts",
    "scheduler": "ddim",
    "num_inference_steps": 25,
    "guidance_scale": 9,
    "samples": 1,
    "seed": 3426017487234,
    "img_width": 512,
    "img_height": 768,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
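Since "base64" is set to False in the payload above, a successful response body is the generated image itself, so it can be written straight to disk. A minimal sketch continuing the snippet above; the output filename and extension are assumptions, not something the API dictates:

if response.status_code == 200:
    # Response body is the raw image bytes when "base64" is False
    with open("revanimated_output.jpeg", "wb") as f:  # filename/extension assumed
        f.write(response.content)
else:
    # Surface API errors (invalid key, bad parameters, etc.)
    print(response.status_code, response.text)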
Request parameters:
prompt : Prompt to render.
negative_prompt : Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.
scheduler : Type of scheduler. Allowed values:
num_inference_steps : Number of denoising steps. min: 20, max: 100
guidance_scale : Scale for classifier-free guidance. min: 0.1, max: 25
samples : Number of samples to generate. min: 1, max: 4
seed : Seed for image generation.
img_width : Width of the image. Allowed values:
img_height : Height of the image. Allowed values:
base64 : Base64 encoding of the output image.
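When "base64" is set to True, the image comes back base64-encoded rather than as raw bytes. The exact response shape is not documented here, so the sketch below, which continues the earlier snippet, treats reading the encoded string directly from the response body as an assumption:

# Assumption: with "base64": True the response body carries a base64 string
encoded_image = response.text
with open("revanimated_output_b64.jpeg", "wb") as f:  # filename assumed
    f.write(base64.b64decode(encoded_image))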
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
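For example, after the requests.post call in the snippet above, the header can be read from the same response object:

# Remaining credits are exposed in the response headers of every API call
remaining_credits = response.headers.get("x-remaining-credits")
print(f"Remaining credits: {remaining_credits}")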
ReV Animated is an innovative model that brings a new dimension to image generation. This model is the product of a checkpoint merge, combining the capabilities of multiple models to create unique and dynamic outputs. With a focus on 2.5D-like image generation, ReV Animated can produce a variety of styles including Fantasy, Anime, Semi-Realistic, and Landscape. The model's understanding of prompts and body poses is impressive, allowing it to generate images of characters holding weapons, becoming giantesses, and exhibiting a range of facial expressions. Designed to work best at a resolution of 512x512, ReV Animated is a LoRA-friendly model that brings your prompts to life.
From a technical perspective, ReV Animated is based on the Stable Diffusion 1.5 base model. Its architecture gives more weight to words at the beginning of your prompt, so the order of your prompt matters. The optimal prompt order is content type, description, style, and then composition. For anime/2.5D-type images, the model responds well to prompts beginning with phrases like "((best quality))", "((masterpiece))", or "(detailed)". The model also excels at creating portraits, making it a powerful tool for artists and creators.
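To illustrate that ordering (content type, then description, then style, then composition), a prompt for this endpoint could be assembled as below and dropped into the data payload from the earlier example; the wording is purely illustrative and not prescribed by the API:

# Illustrative prompt assembly in the recommended order:
# content type -> description -> style -> composition
prompt_parts = [
    "((best quality)), ((masterpiece)), portrait",   # content type with quality tags
    "a lone warrior in advanced futuristic armor",   # description
    "anime, semi-realistic, ultra-detailed",         # style
    "wide angle, centered composition",              # composition
]
data["prompt"] = ", ".join(prompt_parts)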
Its ability to understand and interpret prompts allows for a high degree of customization in the generated images. The model's proficiency in creating a variety of styles, from fantasy to semi-realistic, makes it versatile and adaptable. Furthermore, its understanding of body poses and facial expressions adds a level of detail and realism to the images it generates.
Character Design: Artists can use ReV Animated to generate unique character designs for their projects.
Concept Art: Game developers and filmmakers can use the model to create concept art for their projects.
Illustration: Illustrators can use ReV Animated to generate detailed and dynamic illustrations.
Animation: Animators can use the model to create frames for their animations.
The license for the ReV Animated model, known as the "CreativeML Open RAIL-M" license, is designed to promote both open and responsible use of the model. You may add your own copyright statement to your modifications and provide additional or different license terms for your modifications. You are accountable for the output you generate using the model, and no use of the output can contravene any provision as stated in the license.
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image using the StableDiffusionImg2ImgPipeline from the diffusers library.
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.
InstantID aims to generate customized images with various poses or styles from a single reference ID image while ensuring high fidelity.
Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset, no training.