If you're looking for an API, you can make requests in the programming language of your choice; the example below uses Python.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sdxl1.0-dreamshaper"

# Request payload
data = {
    "prompt": "cinematic photo of portrait of cyberpunk (the Grumpy Cat:1.25) in a spacesuit, looking with endless sadness at the universe passing by, cyberpunk 2077 city bg, 2d masterpiece by john Wilhelm, (grumpy:1.2), (cyberpunk:1.4), photo-realistic, octane render, hdr, neon, lens flares, ( best quality:1.9), active asymmetrical pose, (action-packed:1.8), trending on artstation, 8k, 35mm photograph, film, bokeh, professional, 4k, highly detailed",
    "negative_prompt": "drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly, [deformed | disfigured], poorly drawn, [bad : wrong] anatomy, [extra | missing | floating | disconnected] limb, (mutated hands and fingers), blurry",
    "samples": 1,
    "scheduler": "UniPC",
    "num_inference_steps": 35,
    "guidance_scale": 7,
    "seed": 1135424276,
    "img_width": 896,
    "img_height": 1152,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
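Since "base64" is set to False in the payload above, the response body is the raw image bytes. Here is a minimal sketch of persisting them to disk; the output.jpg filename and the status check are illustrative additions, not part of the API:

# A minimal sketch: save the raw image bytes returned by the call above.
# "output.jpg" is an example filename, not mandated by the API.
if response.status_code == 200:
    with open("output.jpg", "wb") as f:
        f.write(response.content)
else:
    # On failure the body typically carries error details instead of an image
    print(response.status_code, response.text)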
prompt : Prompt to render.
negative_prompt : Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.
samples : Number of samples to generate. min: 1, max: 4.
scheduler : Type of scheduler. Allowed values:
num_inference_steps : Number of denoising steps. min: 20, max: 100.
guidance_scale : Scale for classifier-free guidance. min: 1, max: 25.
seed : Seed for image generation. min: -1, max: 999999999999999.
img_width : Image width; can be between 512 and 2048, in multiples of 8.
img_height : Image height; can be between 512 and 2048, in multiples of 8.
base64 : Base64 encoding of the output image.
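Because img_width and img_height must fall between 512 and 2048 in multiples of 8, it can help to snap arbitrary sizes to a valid value before building the payload. The snap_dimension helper below is a hypothetical convenience, not part of the API:

def snap_dimension(value):
    # Clamp to the allowed range, then round down to the nearest multiple of 8
    value = max(512, min(2048, value))
    return value - (value % 8)

data["img_width"] = snap_dimension(900)    # -> 896
data["img_height"] = snap_dimension(1150)  # -> 1144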
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits header indicates the number of credits remaining in your account. Monitor this value to avoid disruptions in your API usage.
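For example, after the requests.post call above, the header can be read straight off the response object (a minimal sketch; only the x-remaining-credits name comes from these docs):

remaining = response.headers.get('x-remaining-credits')
if remaining is not None:
    print(f"Remaining credits: {remaining}")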
Dreamshaper XL is the latest gem in the illustrious Dreamshaper series. Rooted in the robust SDXL framework, this model emerges as a beacon of adaptability and versatility in the world of Stable Diffusion. With Dreamshaper XL, the boundaries of imagination are expanded, offering a canvas where classic artistry meets modern digital design.
Unparalleled Versatility: Dreamshaper SDXL's adaptability allows it to cater to a vast spectrum of design needs, making it a one-stop solution for diverse creative endeavors.
Enhanced Performance: Harnessing the power of SDXL, Dreamshaper SDXL outshines its predecessors, delivering superior quality and detail in every output.
Broad Creative Spectrum: From classic art renditions to modern digital designs, the model's range is boundless.
Innovative Techniques: Leveraging state-of-the-art techniques, the model ensures every generated piece is a masterpiece in its own right.
Digital Art Creation: Artists can tap into Dreamshaper SDXL to craft vibrant digital artworks or timeless classic pieces.
Gaming Industry: Game developers can utilize the model for character design, ensuring diverse and detailed in-game characters.
Film and Animation: Filmmakers and animators can harness Dreamshaper SDXL for character visualization and scene creation.
Design and Illustration: Designers can visualize concepts, from product designs to book illustrations, with unparalleled detail.
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image, using the StableDiffusionImg2ImgPipeline from diffusers.
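As a rough illustration of that pipeline, here is a minimal local sketch with diffusers' StableDiffusionImg2ImgPipeline. The runwayml/stable-diffusion-v1-5 checkpoint, prompt, and parameter values are assumptions for the example, not details of the hosted model:

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Assumed checkpoint for illustration; any compatible SD weights would do
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.jpg").convert("RGB").resize((768, 512))

# strength controls how far the output may drift from the input image
result = pipe(
    prompt="a fantasy landscape, trending on artstation",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
).images[0]
result.save("img2img_output.jpg")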
Monster Labs QrCode ControlNet on top of SD Realistic Vision v5.1
This model is capable of generating photo-realistic images given any text input, with the extra capability of inpainting pictures by using a mask.
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset, no training.