POST
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/live-portrait"

# Request payload
data = {
    "face_image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/liveportrait-input.jpg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "driving_video": "https://segmind-sd-models.s3.amazonaws.com/display_images/liveportrait-video.mp4",
    "live_portrait_dsize": 512,
    "live_portrait_scale": 2.3,
    "video_frame_load_cap": 128,
    "live_portrait_lip_zero": True,
    "live_portrait_relative": True,
    "live_portrait_vx_ratio": 0,
    "live_portrait_vy_ratio": -0.12,
    "live_portrait_stitching": True,
    "video_select_every_n_frames": 1,
    "live_portrait_eye_retargeting": False,
    "live_portrait_lip_retargeting": False,
    "live_portrait_lip_retargeting_multiplier": 1,
    "live_portrait_eyes_retargeting_multiplier": 1
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
RESPONSE
image/jpeg
HTTP Response Codes
200 - OK : Image Generated
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : The server encountered an issue while processing the request
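
If you are scripting against these codes, a minimal check before saving the response body might look like the sketch below. It reuses url, data, and headers from the request example above, and the output filename is an assumption:

# Continuing from the request example above (url, data, headers already defined)
response = requests.post(url, json=data, headers=headers)

if response.status_code == 200:
    # Success: the response body is the generated output, so write it to disk
    with open("live_portrait_output.jpg", "wb") as f:  # filename is an assumption
        f.write(response.content)
elif response.status_code == 406:
    print("Not enough credits")
else:
    print(f"Request failed ({response.status_code}): {response.text}")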

Attributes


face_image : image *

An image with a face

driving_video : str *

A video to drive the animation

live_portrait_dsize : int ( default: 512 )

Size of the output image

min : 64, max : 2048

live_portrait_scale : float ( default: 2.3 )

Scaling factor for the face

min : 1, max : 4

video_frame_load_cap : int ( default: 128 )

The maximum number of frames to load from the driving video. Set to 0 to use all frames.

live_portrait_lip_zero : bool ( default: true )

Enable lip zero

live_portrait_relative : bool ( default: true )

Use relative positioning

live_portrait_vx_ratio : float ( default: 1 )

Horizontal shift ratio

min : -1, max : 1

live_portrait_vy_ratio : float ( default: -0.12 )

Vertical shift ratio

min : -1, max : 1

live_portrait_stitching : bool ( default: true )

Enable stitching

video_select_every_n_frames : int ( default: 1 )

Select every nth frame from the driving video. Set to 1 to use all frames.

live_portrait_eye_retargeting : bool ( default: false )

Enable eye retargeting

live_portrait_lip_retargeting : bool ( default: false )

Enable lip retargeting


live_portrait_lip_retargeting_multiplier : float ( default: 1 )

Multiplier for lip retargeting

min : 0.01, max : 10

live_portrait_eyes_retargeting_multiplier : float ( default: 1 )

Multiplier for eye retargeting

min : 0.01, max : 10
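
The starred fields above are the only required ones. Assuming the remaining parameters fall back to their listed defaults when omitted, a minimal payload could be as small as the sketch below (it reuses the helper functions, url, and headers from the request example at the top):

# Minimal payload sketch: only the required fields
# (assumes omitted parameters fall back to the defaults listed above)
minimal_data = {
    "face_image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/liveportrait-input.jpg"),
    "driving_video": "https://segmind-sd-models.s3.amazonaws.com/display_images/liveportrait-video.mp4"
}

response = requests.post(url, json=minimal_data, headers=headers)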

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
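
For example, continuing from the request above, the header can be read as in this small sketch (the header value arrives as a string):

# Check remaining credits from the response headers of the call above
remaining_credits = response.headers.get("x-remaining-credits")
if remaining_credits is not None:
    print(f"Remaining credits: {remaining_credits}")
else:
    print("No x-remaining-credits header found in this response")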

Live Portrait

Live Portrait is an advanced AI-driven portrait animation framework. Unlike mainstream diffusion-based methods, Live Portrait leverages an implicit-keypoint-based framework for creating lifelike animations from single source images.

Key Features of Live Portrait

  1. Efficient Animation: LivePortrait synthesizes lifelike videos from a single source image, using it as an appearance reference. The motion (facial expressions and head pose) is derived from a driving video, audio, text, or generation.

  2. Stitching and Retargeting: Instead of following traditional diffusion-based approaches, LivePortrait explores and extends the potential of implicit-keypoint-based techniques. This approach effectively balances realism and expressiveness.

Use Cases of Live Portrait

  • Bring life to historical figures: Imagine educational content or documentaries featuring animated portraits of historical figures with realistic expressions. Live Portrait allows you to create engaging narratives by adding subtle movements and emotions to portraits.

  • Create engaging social media content: Stand out from the crowd with captivating animated profile pictures or eye-catching social media posts featuring your own portrait brought to life. Live Portrait lets you personalize your content and grab attention with dynamic visuals.

  • Enhance e-learning experiences: Make educational content more interactive and engaging for learners of all ages. Animate portraits of educators or characters to explain concepts in a lively and memorable way.

  • Personalize avatars and characters: Design unique and expressive avatars for games, apps, or virtual reality experiences. Live Portrait allows you to create avatars with realistic facial movements that enhance user interaction.