const axios = require('axios');

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/sd1.5-disneyB";

const data = {
  "prompt": "(8k, best quality, masterpiece:1.2), (finely detailed),kuririn, 1boy,solo,cowboy shot, (bald:1.3), dwarf,(lime green long wizard robes:1.5), smile,open mouth, outdoors, (hut), forest, flowers, parted lips,black eyes,(black belt:1.3), (buckle), lime-green long sleeves,((purple stocking hat:1.2)), ((oversized clothes)), brown shoes,lime-green very long sleeves,arms behind back ",
  "negative_prompt": "bad-hands-5, (worst quality:2), (low quality:2),EasyNegative,lowres, ((1girl,fur trim,bangs,((hair)),((limes,sash)),underwear,necklace,choker,grass,motor vehicle,car,buttons,holding:1.2,monochrome,bad eyes,bad hands,underwear)), ((grayscale)",
  "scheduler": "dpmpp_sde_ancestral",
  "num_inference_steps": 25,
  "guidance_scale": 9,
  "samples": 1,
  "seed": 5735283,
  "img_width": 512,
  "img_height": 768,
  "base64": false
};

(async function() {
  try {
    const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
    console.log(response.data);
  } catch (error) {
    // error.response is undefined for network-level failures, so guard before reading it
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
prompt : Prompt to render.
negative_prompt : Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.
scheduler : Type of scheduler. Allowed values:
num_inference_steps : Number of denoising steps. min: 20, max: 100.
guidance_scale : Scale for classifier-free guidance. min: 0.1, max: 25.
samples : Number of samples to generate. min: 1, max: 4.
seed : Seed for image generation.
img_width : Width of the image. Allowed values:
img_height : Height of the image. Allowed values:
base64 : Base64 encoding of the output image.
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
The Stable Diffusion Disney AI model is a latent diffusion model that generates images from text prompts. It is a powerful tool for AI developers who want to experiment with creative text-to-image generation, especially in the style of Disney movies.
The model is trained on a large dataset of images and text from Disney movies, and it is designed to produce images with the vibrant colors, expressive characters, and whimsical settings found in those films. To use the model, provide a text prompt: a description of an image, a concept, or even just a few words.
The Stable Diffusion Disney AI model is a powerful tool for AI developers who want to experiment with creative text-to-image generation in the style of Disney movies. It is easy to use and can generate images in a variety of styles.
If you are interested in experimenting with the tool, contact us for customized solutions, large-scale deployment, and research support.
## Applications/Use Cases