If you're looking to call the API directly, request samples are available in the programming language of your choice; the Python example below shows a typical request.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sdxl1.0-realdream-pony-v9"

# Request payload
data = {
    "prompt": "score_9, score_8_up, score_7_up, portrait photo of mature woman from brasil, sitting in restaurant, sun set",
    "negative_prompt": "worst quality, low quality,cleavage,nfsw,naked, illustration, 3d, 2d, painting, cartoons, sketch",
    "samples": 1,
    "scheduler": "DPM++ 2M SDE Karras",
    "num_inference_steps": 25,
    "guidance_scale": 7,
    "seed": 968875,
    "img_width": 768,
    "img_height": 1152,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
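Since "base64" is set to False in the payload above, response.content holds the raw bytes of the generated image, so you can write it straight to a file instead of printing it. A minimal sketch (the .jpg extension is an assumption; match it to the format the endpoint actually returns):

# Save the generated image to disk instead of printing the raw bytes
# (assumes the endpoint returns image bytes when "base64" is False;
#  the filename and .jpg extension are assumptions)
if response.status_code == 200:
    with open("realdream_pony_output.jpg", "wb") as f:
        f.write(response.content)
else:
    print(response.status_code, response.text)  # error details from the API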
Request parameters:

prompt: Prompt to render.
negative_prompt: Prompts to exclude from the image, e.g. "(worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth".
samples: Number of samples to generate. min: 1, max: 4.
scheduler: Type of scheduler (the payload above uses "DPM++ 2M SDE Karras").
num_inference_steps: Number of denoising steps. min: 1, max: 100.
guidance_scale: Scale for classifier-free guidance. min: 1, max: 25.
seed: Seed for image generation. min: -1, max: 999999999999999.
img_width: Width of the output image. Can only be 1024 for SDXL.
img_height: Height of the output image. Can only be 1024 for SDXL.
base64: Base64 encoding of the output image.
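If you set "base64" to True, the response is expected to carry the image as base64-encoded text rather than raw bytes. The exact response shape is not documented above, so the sketch below (reusing data, url, and headers from the request example) assumes the body is, or contains under an "image" key, a base64 string:

# Hypothetical handling of "base64": True -- the response body is assumed
# to be a base64 string or a JSON object with an "image" field (assumption)
data_b64 = dict(data, base64=True)
resp = requests.post(url, json=data_b64, headers=headers)
body = resp.json()
image_b64 = body if isinstance(body, str) else body.get("image", "")
with open("output_from_base64.png", "wb") as f:
    f.write(base64.b64decode(image_b64))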
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
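For example, you can read that header directly from the same response object used in the request example above (requests treats header names case-insensitively):

# Check remaining credits after each call
remaining = response.headers.get("x-remaining-credits")
if remaining is not None:
    print(f"Remaining credits: {remaining}")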
Real Dream Pony V9 is an advanced image generation model built on the Stable Diffusion XL (SDXL) architecture that excels in photorealism. The model takes text prompts and produces highly realistic images, ideal for photography and digital art, and it builds on previous iterations for enhanced visual quality and detail.
Capabilities: Real Dream Pony V9 excels in producing highly photorealistic images with meticulously realistic lighting, making it an ideal tool for photographers and digital artists seeking to create lifelike visuals. The model supports various samplers and CFG scales, allowing users to adjust the quality and performance according to their specific needs.
Creators: The Real Dream Pony V9 model was developed by an individual or team of creators known as "sinatra." The developers have focused on refining and enhancing the model to achieve superior photorealism and lighting effects.
Training Data: The model's training data comprises images sourced from several notable projects such as Realistic Vision, RealVisXL, and epiCRealism. These datasets contribute significantly to the model's ability to generate highly realistic images.
Technical Architecture: Real Dream Pony V9 is built on the base model architecture of Stable Diffusion XL (SDXL), a variant optimized for larger image generation tasks. The SDXL architecture facilitates complex image synthesis, enabling the model to maintain high levels of detail and realism in its outputs.
Strengths: The model is adept at generating images that are strikingly realistic. Its ability to manage lighting effectively enhances the photorealism of the images, and its flexibility with various samplers and CFG scales gives users customizable options for image quality and computational performance.
To generate an image:
1. Enter a detailed description of the image in the "Prompt" text box.
2. Put a seed value in the "seed" box for reproducibility, or enable the "Randomize Seed" toggle for randomness.
3. Click on "Advanced Parameters" for more customization options.
4. In the "Negative Prompt" field, indicate any unwanted elements in the image.
5. Select a scheduler option from the dropdown, e.g., "DPM++ SDE".
6. Indicate the number of steps for the generation process.
7. Fine-tune how closely the image adheres to your text prompt with the guidance scale.
8. Set the output image dimensions using "img_width" and "img_height".
9. Click "Generate" to create your image.
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image via the StableDiffusionImg2ImgPipeline from diffusers.
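As a rough illustration of that pipeline (a minimal local sketch, not Segmind's hosted implementation; the checkpoint name, input path, and parameter values are assumptions):

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Load a Stable Diffusion checkpoint into the img2img pipeline
# ("runwayml/stable-diffusion-v1-5" is an assumed example checkpoint)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("input.jpg")  # hypothetical input image path
result = pipe(
    prompt="a watercolor painting of the same scene",
    image=init_image,
    strength=0.75,        # how far to deviate from the input image
    guidance_scale=7.5,
).images[0]
result.save("img2img_output.png")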
Monster Labs QrCode ControlNet on top of SD Realistic Vision v5.1
This model is capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask.
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.