If you want to use the API, you can call it from your preferred programming language; the example below uses Python.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sd1.5-epicrealism"

# Request payload
data = {
    "prompt": "RAW commercial photo the pretty instagram fashion model, ((smiling)), in the red full wrap around dress posing, in the style of colorful geometrics, guy aroch, helene knoop, glowing pastels, bold lines, bright colors, sun-soaked colours, Fujifilm X-T4, Sony",
    "negative_prompt": "airbrushed, 3d, render, painting, anime, manga, illustration, (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, big eyes, teeth, nose piercing, (((extra arms))), cartoon, young, child, nsfw",
    "scheduler": "dpmpp_sde_ancestral",
    "num_inference_steps": 25,
    "guidance_scale": 9,
    "samples": 1,
    "seed": 10452167572,
    "img_width": 512,
    "img_height": 768,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
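With "base64": False in the payload, the response body is raw image bytes, so printing it is rarely what you want. A minimal sketch for persisting the result to disk follows; the file extension is an assumption, since the actual image format depends on the endpoint.

```python
# Sketch: persist raw image bytes returned by the API.
# `content` stands in for response.content; the .png extension is an
# assumption -- check the Content-Type header for the real format.
def save_image(content: bytes, path: str) -> int:
    """Write raw image bytes to `path`; return the number of bytes written."""
    with open(path, "wb") as f:
        return f.write(content)

# Example with placeholder bytes standing in for response.content:
written = save_image(b"\x89PNG_fake_bytes", "output.png")
print(written)
```

In a real call you would pass response.content instead of the placeholder bytes.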
prompt : Prompt to render.
negative_prompt : Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.
scheduler : Type of scheduler. Allowed values:
num_inference_steps : Number of denoising steps. min: 20, max: 100
guidance_scale : Scale for classifier-free guidance. min: 0.1, max: 25
samples : Number of samples to generate. min: 1, max: 4
seed : Seed for image generation.
img_width : Width of the image. Allowed values:
img_height : Height of the image. Allowed values:
base64 : Base64 encoding of the output image.
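When "base64" is set to True, the response carries the image as a base64 string rather than raw bytes. A minimal decoding sketch, assuming you have already extracted the base64 string from the response (the exact response field name is not documented here, so check the actual response body):

```python
import base64

# Sketch: decode a base64-encoded image string back into raw bytes
# (relevant when "base64": True is set in the request payload).
def base64_to_bytes(b64_string: str) -> bytes:
    """Decode a base64 string into raw image bytes."""
    return base64.b64decode(b64_string)

# Round-trip demo with placeholder bytes:
encoded = base64.b64encode(b"demo image bytes").decode("utf-8")
decoded = base64_to_bytes(encoded)
print(decoded == b"demo image bytes")  # True
```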
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
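A small sketch of reading that header from a response. The header name comes from the text above; treating its value as a plain integer is an assumption.

```python
# Sketch: read the x-remaining-credits header from an API response.
# Assumes the header value is a plain integer string.
def remaining_credits(headers) -> int:
    """Return remaining credits from response headers, or -1 if absent."""
    value = headers.get("x-remaining-credits")
    return int(value) if value is not None else -1

# With a requests response you would call: remaining_credits(response.headers)
print(remaining_credits({"x-remaining-credits": "42"}))  # 42
print(remaining_credits({}))                             # -1
```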
Epic Realism is a Stable Diffusion latent diffusion model that generates images from text prompts. It is a powerful tool for AI developers experimenting with creative text-to-image generation, especially photorealistic images.
The model is trained on a massive dataset of images and text, and it is specifically designed to generate images with a high degree of realism. To use it, provide a text prompt: a description of an image, a concept, or even just a few words.
To try Epic Realism on the Segmind website:
Go to the Segmind website: https://segmind.com/
Click on the "Models" tab and select "Epic Realism".
Click on the "Try it out" button and upload an image that you want to use as a starting point.
Click on the "Generate" button to generate an image.
Here are some tips for using Epic Realism:
Use clear and concise text prompts. The more specific your text prompt is, the more likely the model is to generate an image that matches your expectations.
Experiment with different styles. Epic Realism can generate images in a variety of styles; try different text prompts to see what the model produces.
Adjust the number of diffusion steps. The number of diffusion steps controls the level of detail in the image: more steps yield a more detailed image, but also take longer to generate.
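The steps tip can be sketched as payload variants that sweep num_inference_steps within its documented 20-100 range; the base prompt here is a placeholder.

```python
# Sketch: build payload variants to compare detail vs. generation time.
# 20 and 100 are the documented min/max for num_inference_steps;
# the base prompt is a placeholder, not part of the API.
base_payload = {
    "prompt": "RAW commercial photo of a fashion model",
    "num_inference_steps": 25,
    "guidance_scale": 9,
}

def with_steps(payload: dict, steps: int) -> dict:
    """Return a copy of the payload with the step count clamped to [20, 100]."""
    clamped = max(20, min(100, steps))
    return {**payload, "num_inference_steps": clamped}

variants = [with_steps(base_payload, s) for s in (20, 50, 100)]
print([v["num_inference_steps"] for v in variants])  # [20, 50, 100]
```

Each variant would be POSTed to the endpoint exactly like the single payload in the example above.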
If you are interested in experimenting with the tool, contact us for customized solutions, large-scale deployment, and research support.
Some use cases for Epic Realism:
Generating concept art for movies and video games.
Creating marketing materials, such as product images and social media graphics.
Designing user interfaces (UIs) for websites and apps.
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image using the StableDiffusionImg2ImgPipeline from diffusers.
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset, no training.
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.