POST
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sd1.5-disneyB"

# Request payload
data = {
    "prompt": "(8k, best quality, masterpiece:1.2), (finely detailed),kuririn, 1boy,solo,cowboy shot, (bald:1.3), dwarf,(lime green long wizard robes:1.5), smile,open mouth, outdoors, (hut), forest, flowers, parted lips,black eyes,(black belt:1.3), (buckle), lime-green long sleeves,((purple stocking hat:1.2)), ((oversized clothes)), brown shoes,lime-green very long sleeves,arms behind back ",
    "negative_prompt": "bad-hands-5, (worst quality:2), (low quality:2),EasyNegative,lowres, ((1girl,fur trim,bangs,((hair)),((limes,sash)),underwear,necklace,choker,grass,motor vehicle,car,buttons,holding:1.2,monochrome,bad eyes,bad hands,underwear)), ((grayscale)",
    "scheduler": "dpmpp_sde_ancestral",
    "num_inference_steps": 25,
    "guidance_scale": 9,
    "samples": 1,
    "seed": 5735283,
    "img_width": 512,
    "img_height": 768,
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
RESPONSE
image/jpeg
HTTP Response Codes
200 - OK: Image generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: Server had some issue with processing
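On a 200 response with base64 set to false, the response body is the raw JPEG bytes. A minimal sketch of checking the status code and writing the output to disk (the helper name is illustrative, not part of the API):

```python
def save_generated_image(response, out_path):
    """Save the binary image payload to disk, raising on API errors."""
    if response.status_code != 200:
        raise RuntimeError(f"API call failed with HTTP {response.status_code}")
    # The body is the image itself, so write it out as binary.
    with open(out_path, "wb") as f:
        f.write(response.content)
    return out_path
```

Call it as save_generated_image(response, "output.jpg") after the requests.post above.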

Attributes


prompt  str *

Prompt to render


negative_prompt  str ( default: None )

Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'


scheduler  enum:str ( default: UniPC )

Type of scheduler.

Allowed values:


num_inference_steps  int ( default: 20 ) Affects Pricing

Number of denoising steps.

min : 20,

max : 100


guidance_scale  float ( default: 7.5 )

Scale for classifier-free guidance

min : 0.1,

max : 25


samples  int ( default: 1 ) Affects Pricing

Number of samples to generate.

min : 1,

max : 4


seed  int ( default: -1 )

Seed for image generation.
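By convention a seed of -1 asks the service to pick a random seed (an assumption here, not stated by this page). Resolving the seed client-side instead lets you log it and replay it for reproducible generations; a small sketch with an illustrative helper and an assumed 32-bit seed range:

```python
import random

def pick_seed(seed=-1):
    # Resolve -1 to a concrete random seed so it can be logged and reused;
    # any explicit seed is passed through unchanged.
    return random.randint(0, 2**32 - 1) if seed == -1 else seed
```

For example, data["seed"] = pick_seed(-1) before the request, then record the value alongside the saved image.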


img_width  enum:int ( default: 512 ) Affects Pricing

Width of the image.

Allowed values:


img_height  enum:int ( default: 512 ) Affects Pricing

Height of the image

Allowed values:


base64  boolean ( default: 1 )

Base64 encoding of the output image.
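When base64 is true, the image comes back base64-encoded instead of as raw bytes. A minimal sketch of decoding such a string back to image bytes (the exact shape of the JSON response is not documented on this page, so extracting the string from it is left to the caller):

```python
import base64

def decode_base64_image(b64_string, out_path):
    # Reverse of the encoding helpers above: turn the base64 text
    # back into raw image bytes and write them to disk.
    image_bytes = base64.b64decode(b64_string)
    with open(out_path, "wb") as f:
        f.write(image_bytes)
    return image_bytes
```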

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
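A small sketch of reading that header from a requests response (the helper name is illustrative; only the x-remaining-credits header name comes from this page):

```python
def remaining_credits(headers):
    """Return the remaining credits reported by the API, or None if absent."""
    value = headers.get("x-remaining-credits")
    return int(value) if value is not None else None
```

For example, remaining_credits(response.headers) after the requests.post call; requests exposes headers as a case-insensitive mapping, so the lookup works regardless of header casing.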

Disney

Stable Diffusion Disney AI model is a latent diffusion model that can be used to generate images from text prompts. It is a powerful tool for AI developers who want to experiment with creative text-to-image generation, especially for generating images in the style of Disney movies.

The model is trained on a massive dataset of images and text from Disney movies, and it is specifically designed to generate images that have the same kind of vibrant colors, expressive characters, and whimsical settings that are found in Disney movies. To use the model, you first need to provide a text prompt. The text prompt can be anything you want, such as a description of an image, a concept, or even just a few words.

Here are some tips for using Stable Diffusion Disney:

  1. Use clear and concise text prompts. The more specific your text prompt is, the more likely the model is to generate an image that matches your expectations.

  2. Experiment with different styles. The Stable Diffusion Disney model can generate images in a variety of styles; try different text prompts to see the range of styles it can produce.

  3. Adjust the number of diffusion steps. The number of diffusion steps controls the level of detail in the image. More diffusion steps will result in a more detailed image, but it will also take longer to generate the image.
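Tip 3 maps to the num_inference_steps parameter documented above, which accepts values from 20 to 100. A small sketch of deriving payload variants that stay inside that range (the helper name is illustrative):

```python
def payload_with_steps(base_payload, steps):
    # Clamp the requested step count to the documented range (min 20, max 100)
    # and return a new payload dict, leaving the original untouched.
    capped = max(20, min(100, steps))
    return {**base_payload, "num_inference_steps": capped}
```

This makes it easy to sweep step counts, e.g. [payload_with_steps(data, n) for n in (20, 40, 80)], trading detail against generation time.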

Applications/Use Cases

  1. Generating concept art based on Disney movies.

  2. Creating fan art based on Disney movies.

  3. Designing merchandise based on Disney movies.