POST
javascript
const axios = require('axios');

// Fetch a remote image and return its contents as a base64 string
async function toB64(imgUrl) {
  const response = await axios.get(imgUrl, { responseType: 'arraybuffer' });
  return Buffer.from(response.data).toString('base64');
}

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/ai-product-photo-editor";

(async function () {
  const data = {
    "product_image": await toB64('https://segmind-sd-models.s3.amazonaws.com/display_images/ppv3-test/main-ip.jpeg'),
    "background_image": await toB64('https://segmind-sd-models.s3.amazonaws.com/display_images/ppv3-test/bg6.png'),
    "prompt": "photo of a mixer grinder in modern kitchen",
    "negative_prompt": "illustration, bokeh, low resolution, bad anatomy, painting, drawing, cartoon, bad quality, low quality",
    "num_inference_steps": 21,
    "guidance_scale": 6,
    "seed": 2566965,
    "sampler": "dpmpp_3m_sde_gpu",
    "scheduler": "karras",
    "samples": 1,
    "ipa_weight": 0.3,
    "ipa_weight_type": "linear",
    "ipa_start": 0,
    "ipa_end": 0.5,
    "ipa_embeds_scaling": "V only",
    "cn_strenght": 0.85,
    "cn_start": 0,
    "cn_end": 0.8,
    "dilation": 10,
    "mask_threshold": 220,
    "gaussblur_radius": 8,
    "base64": false
  };

  try {
    const response = await axios.post(url, data, {
      headers: { 'x-api-key': api_key }
    });
    console.log(response.data);
  } catch (error) {
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
RESPONSE
image/jpeg
HTTP Response Codes
200 - OK : Image Generated
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : Server had some issue with processing
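When calling the endpoint from code, these status codes can be mapped to messages for error handling. A minimal sketch; the helper below is illustrative and simply mirrors the table above:

```javascript
// Map the documented Segmind HTTP response codes to their meanings.
const STATUS_MESSAGES = {
  200: 'OK: Image Generated',
  401: 'Unauthorized: User authentication failed',
  404: 'Not Found: The requested URL does not exist',
  405: 'Method Not Allowed: The requested HTTP method is not allowed',
  406: 'Not Acceptable: Not enough credits',
  500: 'Server Error: Server had some issue with processing',
};

function describeStatus(code) {
  return STATUS_MESSAGES[code] || `Unexpected status: ${code}`;
}
```

With axios, the status is available on `error.response.status` in a catch block, so a 406 can be surfaced as a clear "out of credits" message rather than a generic failure.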

Attributes


product_image ( image, required )

Product Image


background_image ( image, required )

Background Reference Image


prompt ( str, required )

Prompt for image generation


negative_prompt ( str, default: illustration, bokeh, low resolution, bad anatomy, painting, drawing, cartoon, bad quality, low quality )

Negative prompt


num_inference_steps ( int, required )

Number of steps to generate image

min: 20, max: 100


guidance_scale ( float, default: 6 )

Scale for classifier-free guidance

min: 0, max: 10


seed ( int, default: 2566965 )

Seed number for image generation


sampler ( enum:str, required )

Sampler

Allowed values:


scheduler ( enum:str, default: karras )

Scheduler

Allowed values:


samples ( int, default: 1 )

Number of samples to generate


ipa_weight ( float, default: 0.3 )

IP Adapter weight

min: 0, max: 2


ipa_weight_type ( enum:str, default: linear )

Type of IP Adapter weight

Allowed values:


ipa_start ( float, default: 1 )

IP Adapter start value

min: 0, max: 1


ipa_end ( float, default: 0.5 )

IP Adapter end value

min: 0, max: 1


ipa_embeds_scaling ( enum:str, default: V only )

IP Adapter embedding scaling

Allowed values:


cn_strenght ( float, default: 0.85 )

ControlNet strength

min: 0, max: 2


cn_start ( float, default: 1 )

ControlNet start value

min: 0, max: 1


cn_end ( float, default: 0.8 )

ControlNet end value

min: 0, max: 1


dilation ( int, default: 10 )

Dilation value

min: -100, max: 100


mask_threshold ( int, default: 220 )

Mask threshold value

min: 0, max: 255


gaussblur_radius ( int, default: 8 )

Gaussian blur radius

min: 0, max: 20


base64 ( bool, default: true )

Output as base64

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
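For example, the remaining-credit header can be read off any response object. A minimal sketch (the helper is hypothetical; it assumes the header value is a plain integer, and that the HTTP client lower-cases header names as axios does):

```javascript
// Extract the remaining-credit count from an API response's headers.
function remainingCredits(headers) {
  const value = headers && headers['x-remaining-credits'];
  const credits = Number.parseInt(value, 10);
  return Number.isNaN(credits) ? null : credits;
}

// Usage (hypothetical):
// const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
// console.log('Credits left:', remainingCredits(response.headers));
```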

AI Product Photo Editor

AI Product Photo Editor leverages advanced image-based ML techniques to generate high-quality product visuals using text prompts, product images, and background images. This method combines inpainting, superimposition, and a dual-pass image generation process, employing Canny edge detection and IP-Adapter for background integration. The output enhances image details, ensuring high fidelity and professional-grade photos.

Capabilities:

  1. Can generate high-quality product images based on a combination of text prompts, product images, and background images.

  2. Employs inpainting with IP-Adapter and superimposition techniques for seamless image creation.

  3. Utilizes Canny edge detection to enhance edge details, ensuring sharp and defined product outlines.

  4. Executes a two-pass image generation process: the first pass integrates the product image with the background, and the second pass refines details like shadows and textures.

  5. Offers flexibility in modifying backgrounds or environments where the product is displayed, enhancing the visual appeal and context.

Technical Architecture: Combines inpainting with IP-Adapter using a reference image for background setting. Implements Canny edge detection to enhance and refine edge details, ensuring high-fidelity product images.

Employs a two-pass image generation process:

  1. First pass: Generates the base image integrating the product with the background.

  2. Second pass: Enhances finer details such as shadows and textures to ensure a photorealistic output. Concludes with a superimposition step to finalize and perfect the overall image composition.

Strengths: Capable of producing highly realistic and visually appealing product images. Flexibility in customizing image backgrounds and detailed enhancements offers wide-ranging applications. The two-pass generation process ensures high attention to detail, resulting in polished final images. Canny edge detection significantly improves the clarity and precision of product outlines.

How to use the model?

Step 1: Enter Prompt

Prompt: Describe the product image you want to create. For example, "Photos of plastic containers in a studio kitchen, minimal studio background."

Step 2: Upload Images

  • Product Image: Click on the upload area to browse and select your product image or drag and drop the image file.

  • Background Image: Click on the upload area to browse and select your background image or drag and drop the image file.

Step 3: Configure Negative Prompt (Optional)

Negative Prompt: Enter descriptions of elements you want to exclude from the generated image, such as "Illustration, broken, low resolution, bad anatomy."

Step 4: Set Inference Steps

Inference Steps: Enter the number of steps for the machine learning model to generate the image, e.g., 21.

Step 5: Set Randomization Seed

Seed: Enter a seed number for randomization to reproduce the same image on subsequent runs.

Step 6: Advanced Parameters

Click on the "Advanced Parameters" dropdown to reveal additional settings that further fine-tune the outputs.

  • Guidance Scale: Adjusts how much the model adheres to the text prompt (higher value = stricter adherence).

  • Sampler: Selects the algorithm used for sampling; for example, "dpmpp_3m_sde_gpu."

  • Scheduler: Algorithmic scheduler for managing the sampling steps.

  • IPA Weight: The weight for the IP-Adapter controlling how much it influences the background image blending.

  • IPA Weight Type: The interpolation type for setting the IPA weight (e.g., linear).

  • IPA Start: Beginning point for the IP-Adapter influence.

  • IPA End: End point for the IP-Adapter influence.

  • IPA Embeds Scaling: Determines how embeddings from the IP-Adapter are scaled.

  • ControlNet Strength: Amount of control the ControlNet model has over the generation.

  • ControlNet Start: Start point for ControlNet influence.

  • ControlNet End: End point for ControlNet influence.

  • Dilation: Amount of dilation applied to the edges.

  • Mask Threshold: Threshold value for the masking process.

  • Gaussian Blur Radius: Radius for applying Gaussian blur to the image.
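The advanced parameters above can be assembled into a request body programmatically. A minimal sketch; the `advancedParams` and `clamp` helpers are hypothetical, with defaults and ranges taken from the Attributes section:

```javascript
// Clamp a value into a documented [min, max] range.
function clamp(value, min, max) {
  return Math.min(max, Math.max(min, value));
}

// Build an advanced-parameter payload, keeping each numeric value
// inside its documented range.
function advancedParams(overrides = {}) {
  const p = {
    guidance_scale: 6,
    sampler: 'dpmpp_3m_sde_gpu',
    scheduler: 'karras',
    ipa_weight: 0.3,
    ipa_weight_type: 'linear',
    ipa_start: 0,
    ipa_end: 0.5,
    ipa_embeds_scaling: 'V only',
    cn_strenght: 0.85, // note: the API spells this parameter "cn_strenght"
    cn_start: 0,
    cn_end: 0.8,
    dilation: 10,
    mask_threshold: 220,
    gaussblur_radius: 8,
    ...overrides,
  };
  p.guidance_scale = clamp(p.guidance_scale, 0, 10);   // min 0, max 10
  p.ipa_weight = clamp(p.ipa_weight, 0, 2);            // min 0, max 2
  p.cn_strenght = clamp(p.cn_strenght, 0, 2);          // min 0, max 2
  p.dilation = clamp(p.dilation, -100, 100);           // min -100, max 100
  p.mask_threshold = clamp(p.mask_threshold, 0, 255);  // min 0, max 255
  p.gaussblur_radius = clamp(p.gaussblur_radius, 0, 20); // min 0, max 20
  return p;
}
```

Spreading the result into the request body alongside the prompt, images, steps, and seed yields a complete payload like the one in the code sample at the top of this page.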