POST
```javascript
const axios = require('axios');
const FormData = require('form-data');

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/face-to-many";

const reqBody = {
  "seed": 1321321,
  "image": "https://segmind-sd-models.s3.amazonaws.com/display_images/Ftm_ip.png.jpg",
  "style": "3D",
  "prompt": "a person",
  "lora_scale": 1,
  "prompt_strength": 4.5,
  "denoising_strength": 0.65,
  "instant_id_strength": 1,
  "control_depth_strength": 0.8
};

(async function() {
  try {
    const formData = new FormData();

    // Append regular fields
    for (const key in reqBody) {
      if (reqBody.hasOwnProperty(key)) {
        formData.append(key, reqBody[key]);
      }
    }

    // Convert and append images as Base64 if necessary

    const response = await axios.post(url, formData, {
      headers: {
        'x-api-key': api_key,
        ...formData.getHeaders()
      }
    });

    console.log(response.data);
  } catch (error) {
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
```
RESPONSE
image/jpeg
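The endpoint returns the generated picture as raw image/jpeg bytes, so in practice you will usually write the response to a file rather than log it. A minimal sketch, assuming the same axios and form-data setup as the example above (saveGeneratedImage and the output path are illustrative names, not part of the API):

```javascript
const fs = require('fs');
const axios = require('axios');
const FormData = require('form-data');

// Sketch: POST the request and write the raw JPEG bytes to disk.
// Assumes the endpoint returns the binary image on success (HTTP 200).
async function saveGeneratedImage(url, apiKey, reqBody, outPath) {
  const formData = new FormData();
  for (const [key, value] of Object.entries(reqBody)) {
    formData.append(key, value);
  }
  const response = await axios.post(url, formData, {
    headers: { 'x-api-key': apiKey, ...formData.getHeaders() },
    responseType: 'arraybuffer' // keep the image/jpeg body as raw bytes
  });
  fs.writeFileSync(outPath, Buffer.from(response.data));
  console.log(`Saved ${response.data.byteLength} bytes to ${outPath}`);
}
```

For example, `saveGeneratedImage(url, api_key, reqBody, 'face-to-many.jpg')` can replace the `console.log` call in the example above.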
HTTP Response Codes
200 - OK : Image Generated
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : Server had some issue with processing
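With axios, a failed call surfaces these codes on error.response.status. A small sketch of a catch-block helper that maps the documented codes to readable messages (handleApiError is an illustrative name, not part of the API):

```javascript
// Sketch: translate the documented status codes into readable log messages.
function handleApiError(error) {
  if (!error.response) {
    // No HTTP response at all (network failure, bad URL, etc.)
    console.error('Request failed:', error.message);
    return;
  }
  switch (error.response.status) {
    case 401: console.error('Unauthorized: check your x-api-key.'); break;
    case 404: console.error('Not Found: check the endpoint URL.'); break;
    case 405: console.error('Method Not Allowed: this endpoint expects POST.'); break;
    case 406: console.error('Not Acceptable: not enough credits.'); break;
    case 500: console.error('Server error while processing the request.'); break;
    default:  console.error('Unexpected status:', error.response.status);
  }
}
```

Calling handleApiError(error) inside the catch block of the example above keeps the error handling in one place.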

Attributes


seed int ( default: -1 )

Fix the random seed for reproducibility


image str ( default: https://segmind-sd-models.s3.amazonaws.com/display_images/Ftm_ip.png.jpg )

An image of a person to be converted


style enum:str ( default: 3D )

The output style to apply to the face.

Allowed values: 3D, Emoji, Video game, Pixels, Clay, Toy


prompt str ( default: a person )

A text prompt describing the subject of the input image


lora_scale float ( default: 1 )

How strong the LoRA will be

min: 0, max: 1


custom_lora_url str ( default: 1 )

URL to a Replicate custom LoRA. Must be in the format https://replicate.delivery/pbxt/[id]/trained_model.tar or https://pbxt.replicate.delivery/[id]/trained_model.tar


negative_prompt str ( default: 1 )

Things you do not want in the image


prompt_strength float ( default: 4.5 )

Strength of the prompt. This is the CFG scale; higher values follow the prompt more closely, while lower values keep more of a likeness to the original.

min: 0, max: 20


denoising_strength float ( default: 0.65 )

How much of the original image to replace: 0 keeps the original image unchanged, while 1 discards it entirely

min: 0, max: 1


instant_id_strength float ( default: 1 )

How strong the InstantID will be.

min: 0, max: 1


control_depth_strength float ( default: 0.8 )

Strength of the depth ControlNet. The higher the value, the more the ControlNet affects the output.

min: 0, max: 1
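The numeric attributes above each come with a documented range. A hypothetical helper that checks a request body against those ranges before sending it (the ranges are copied from this list; validateParams is not part of the API):

```javascript
// Sketch: check numeric parameters against the documented ranges.
const RANGES = {
  lora_scale:             [0, 1],
  prompt_strength:        [0, 20],
  denoising_strength:     [0, 1],
  instant_id_strength:    [0, 1],
  control_depth_strength: [0, 1],
};

function validateParams(reqBody) {
  const problems = [];
  for (const [name, [min, max]] of Object.entries(RANGES)) {
    const value = reqBody[name];
    if (value !== undefined && (value < min || value > max)) {
      problems.push(`${name}=${value} is outside [${min}, ${max}]`);
    }
  }
  return problems; // an empty array means the body looks valid
}
```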

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
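For example, with axios the header can be read off the response object (axios lower-cases header names; logRemainingCredits is an illustrative helper, not part of any SDK):

```javascript
// Sketch: log the remaining credits after each successful call.
function logRemainingCredits(response) {
  const credits = response.headers['x-remaining-credits'];
  if (credits !== undefined) {
    console.log('Remaining credits:', credits);
  }
}
```

Call it right after the `await axios.post(...)` in the example at the top.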

Face to Many

With Face to Many, you can turn a face into different styles such as 3D, emoji, pixel art, video game, clay, or toy; a sketch that generates every style follows the list below.

  • 3D: Create a three-dimensional representation of the face.

  • Emoji: Turn the face into a fun, expressive emoji.

  • Pixel Art: Render the face in a retro, pixelated style reminiscent of early video games.

  • Video Game: Transform the face to resemble characters from video games.

  • Clay: Mold the face as if it were made from clay, similar to stop-motion animation characters.

  • Toy: Convert the face to look like a toy figure.
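To compare the styles side by side, you can loop over them and save one output per style, reusing the request pattern from the example at the top. A sketch under those assumptions (generateAllStyles and the file names are illustrative; verify the exact style strings against the style attribute):

```javascript
const fs = require('fs');
const axios = require('axios');
const FormData = require('form-data');

// Sketch: generate one image per style and write each to disk.
const styles = ['3D', 'Emoji', 'Pixels', 'Video game', 'Clay', 'Toy'];

async function generateAllStyles(url, apiKey, baseBody) {
  for (const style of styles) {
    const formData = new FormData();
    for (const [key, value] of Object.entries({ ...baseBody, style })) {
      formData.append(key, value);
    }
    const response = await axios.post(url, formData, {
      headers: { 'x-api-key': apiKey, ...formData.getHeaders() },
      responseType: 'arraybuffer'
    });
    const outPath = `face-to-many-${style.toLowerCase().replace(/\s+/g, '-')}.jpg`;
    fs.writeFileSync(outPath, Buffer.from(response.data));
    console.log(`Wrote ${outPath}`);
  }
}
```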

This model opens up a world of creative possibilities, making it easy to experiment with different artistic styles and representations.

Key Components of Face to Many

Under the hood, the Face to Many model is a combination of Instant ID + IP Adapter + ControlNet Depth. The sketch after this list shows how these components surface as request parameters.

  1. Instant ID is responsible for identifying the unique features of the face of the person in the input image.

  2. An image encoder (IP Adapter) helps transfer the chosen style (3D, emoji, pixel art, video game, clay, or toy) onto the face of the person in the input image.

  3. ControlNet Depth estimates the depth of different parts of the face. This helps in creating a 3D representation of the face, which can then be used to apply the style seamlessly.
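These components are exposed in the request through the instant_id_strength and control_depth_strength parameters, with prompt_strength acting as the CFG scale. Two illustrative presets, not recommendations from the docs, that trade likeness against stylization:

```javascript
// Sketch: example presets only; tune the values for your own inputs.
const likenessPreset = {
  instant_id_strength: 1.0,     // strong InstantID: keep the face recognizable
  control_depth_strength: 0.8,  // depth ControlNet preserves facial structure
  prompt_strength: 3.0          // lower CFG: stay closer to the original
};

const stylizedPreset = {
  instant_id_strength: 0.6,     // weaker InstantID: allow more deviation
  control_depth_strength: 0.5,  // looser structural guidance
  prompt_strength: 7.5          // higher CFG: follow the prompt and style more
};
```

Either preset can be merged into the request body, e.g. `{ ...reqBody, ...stylizedPreset }`.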

How to use Face to Many

  1. Input image: Choose an image that you want to transform. A close-up portrait shot is ideal because it allows the model to clearly identify and process the facial features.

  2. Prompt: Provide a text prompt based on the input image. This could be a simple description of the person, such as “a man”. The model uses this prompt to guide the style transfer process.

  3. Style: Choose the style you want for the output image (3D, Emoji, Toy, Clay, Pixels, Video game).

  4. Custom LoRA: You can incorporate other styles by using custom LoRA models based on SDXL. Simply paste the link to the custom LoRA model.

  5. Parameters: Adjust the parameters below to guide the final image output.

    a. Prompt Strength: This parameter is the CFG scale. It determines how closely the image generation follows the text prompt; a higher value results in an output image that more closely matches the prompt.

    b. Instant ID Strength: This parameter determines the degree of influence of Instant ID. The higher the value, the closer the face in the output image looks to the input image.

    c. ControlNet Depth Strength: This parameter determines the degree of influence of the ControlNet Depth conditioning. The higher the value, the greater its influence.
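Putting the steps together, a hedged end-to-end sketch: the API key and custom LoRA URL are placeholders (the LoRA line is commented out and only mirrors the documented format), and the parameter values are examples rather than recommended settings.

```javascript
const fs = require('fs');
const axios = require('axios');
const FormData = require('form-data');

const api_key = 'YOUR API-KEY';
const url = 'https://api.segmind.com/v1/face-to-many';

const reqBody = {
  image: 'https://segmind-sd-models.s3.amazonaws.com/display_images/Ftm_ip.png.jpg', // 1. input image
  prompt: 'a person',           // 2. simple description of the subject
  style: '3D',                  // 3. one of the supported styles
  // custom_lora_url: 'https://replicate.delivery/pbxt/[id]/trained_model.tar', // 4. optional custom LoRA
  prompt_strength: 4.5,         // 5a. CFG scale
  instant_id_strength: 1,       // 5b. how closely the face matches the input
  control_depth_strength: 0.8   // 5c. influence of the depth ControlNet
};

(async () => {
  try {
    const formData = new FormData();
    for (const [key, value] of Object.entries(reqBody)) formData.append(key, value);
    const response = await axios.post(url, formData, {
      headers: { 'x-api-key': api_key, ...formData.getHeaders() },
      responseType: 'arraybuffer'
    });
    fs.writeFileSync('face-to-many.jpg', Buffer.from(response.data));
    console.log('Saved face-to-many.jpg');
  } catch (error) {
    console.error('Error:', error.response ? error.response.status : error.message);
  }
})();
```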