POST
javascript
const axios = require('axios');

// Download the input image and convert it to a base64 string
async function toB64(imgUrl) {
  const response = await axios.get(imgUrl, { responseType: 'arraybuffer' });
  return Buffer.from(response.data).toString('base64');
}

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/sd3-med-img2img";

(async function () {
  try {
    const data = {
      "prompt": "photo of a boy holding phone on table,3d pixar style",
      "negative_prompt": "low quality,less details",
      "image": await toB64('https://segmind-sd-models.s3.amazonaws.com/display_images/sd3-img2img-ip.jpg'),
      "num_inference_steps": 20,
      "guidance_scale": 5,
      "seed": 698845,
      "samples": 1,
      "strength": 0.7,
      "sampler": "dpmpp_2m",
      "scheduler": "sgm_uniform",
      "base64": false
    };

    const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
    console.log(response.data);
  } catch (error) {
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
RESPONSE
image/jpeg
HTTP Response Codes
200 - OK : Image Generated
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : Server had some issue with processing
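
If a request fails, axios exposes the HTTP status on error.response.status, so the codes above can be turned into actionable messages. A minimal sketch (the describeApiError helper is illustrative, not part of the API):

javascript
// Minimal sketch: translate the documented status codes into log messages.
// Operates on an axios error object; the helper name is hypothetical.
function describeApiError(error) {
  if (!error.response) return `Network error: ${error.message}`;
  switch (error.response.status) {
    case 401: return 'Unauthorized: check the x-api-key header';
    case 404: return 'Not Found: verify the endpoint URL';
    case 405: return 'Method Not Allowed: this endpoint expects POST';
    case 406: return 'Not Acceptable: not enough credits on the account';
    case 500: return 'Server Error: the server had an issue, retry later';
    default:  return `Unexpected status ${error.response.status}`;
  }
}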

Attributes


prompt str *

Text prompt for image generation


negative_prompt str ( default: low quality,less details )

Negative text prompt to avoid certain qualities


image image *

Input image


num_inference_steps int ( default: 20 )

Number of inference steps for image generation

min: 1, max: 100


guidance_scale float ( default: 5 )

Guidance scale for image generation

min: 1, max: 20


seed int ( default: 698845 )

Seed for random number generation


samples int ( default: 1 )

Number of samples to generate


strength float ( default: 0.7 )

Strength of the image transformation

min: 0, max: 1


sampler enum:str ( default: dpmpp_2m )

Sampler for the image generation process

Allowed values:


scheduler enum:str ( default: sgm_uniform )

Scheduler for the image generation process

Allowed values:


base64 bool ( default: 1 )

Whether to return the output image as a base64-encoded string
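
The min/max ranges listed above can be enforced on the client before a request is sent; server-side handling of out-of-range values is not documented here. A minimal sketch (the clamp and buildPayload helpers are illustrative, not part of the API):

javascript
// Minimal sketch: clamp numeric parameters to the documented ranges
// and fall back to the documented defaults. Helper names are hypothetical.
const clamp = (value, min, max) => Math.min(Math.max(value, min), max);

function buildPayload(options) {
  return {
    prompt: options.prompt,                                   // required
    negative_prompt: options.negative_prompt ?? 'low quality,less details',
    image: options.image,                                     // required, base64 string
    num_inference_steps: clamp(options.num_inference_steps ?? 20, 1, 100),
    guidance_scale: clamp(options.guidance_scale ?? 5, 1, 20),
    seed: options.seed ?? 698845,
    samples: options.samples ?? 1,
    strength: clamp(options.strength ?? 0.7, 0, 1),
    sampler: options.sampler ?? 'dpmpp_2m',
    scheduler: options.scheduler ?? 'sgm_uniform',
    base64: options.base64 ?? false                           // as in the POST example above
  };
}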

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits header indicates the number of credits remaining in your account. Monitor this value to avoid any disruptions in your API usage.
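
For example, the remaining-credit count can be read directly from the axios response headers (a minimal sketch; the postAndReportCredits helper is illustrative):

javascript
const axios = require('axios');

// Minimal sketch: read the x-remaining-credits response header after a call.
// `endpoint`, `payload` and `apiKey` are supplied by the caller, as in the POST example above.
async function postAndReportCredits(endpoint, payload, apiKey) {
  const response = await axios.post(endpoint, payload, { headers: { 'x-api-key': apiKey } });
  console.log('Remaining credits:', response.headers['x-remaining-credits']);
  return response.data;
}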

Stable Diffusion 3 Medium Image-to-Image

Stable Diffusion 3 Medium is a cutting-edge AI model that uses advanced image-to-image technology to transform one image into another. With 2 billion parameters, it generates high-quality, realistic images by processing an initial image together with a text prompt.

  • Capabilities: High-quality image transformations with efficient resource management, allowing operation on consumer-grade GPUs. It also provides adjustable transformation strengths to fine-tune outputs.

  • Creators: The model was developed by Stability AI.

  • Training Data Info: The details of the training data remain undisclosed, but the model was trained on large and diverse image datasets.

  • Technical Architecture: The core architecture is based on a Diffusion Transformer, allowing complex image transformations.

  • Strengths: Exceptional image transformation quality, with broad creative possibilities. It's also optimized for efficient performance.

How to Use Stable Diffusion 3 Medium in Image-to-Image?

Step-by-Step Guide:

  1. Input Image: Click on the upload area, and upload an image in PNG, JPG, or GIF format, with a maximum resolution of 2048x2048 pixels.

  2. Set the Prompt: Enter a descriptive text prompt in the field to guide the image transformation.

  3. Seed: Optionally, set a seed value. Check the "Randomize Seed" box for unique outputs each time.

  4. Strength: Adjust the 'Strength' parameter to control how much the generated image should follow the input image.

  5. Negative Prompt: Enter text in the "Negative Prompt" field to specify what to avoid.

  6. Set Advanced Parameters: Control the number of refinement steps with 'Inference Steps'. Use 'Guidance Scale' to balance how closely the output follows the prompt against allowing more varied results. Choose the sampling method for the diffusion process with 'Sampler', and select its scheduling algorithm with 'Scheduler'.

  7. Generate: Click the "Generate" button to start the image generation process. The output image will appear once generation is complete.
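
The same flow can be driven entirely through the API. A minimal sketch, assuming base64 is left at false so the endpoint returns raw image/jpeg bytes (as indicated in the RESPONSE section above), which are then written to disk:

javascript
const axios = require('axios');
const fs = require('fs');

// Minimal sketch: request the generated image as raw bytes and save it as output.jpg.
// `payload` is a request body like the one in the POST example, with base64: false.
async function generateAndSave(apiKey, payload) {
  const response = await axios.post(
    'https://api.segmind.com/v1/sd3-med-img2img',
    payload,
    { headers: { 'x-api-key': apiKey }, responseType: 'arraybuffer' }
  );
  fs.writeFileSync('output.jpg', Buffer.from(response.data));
}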