import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    response.raise_for_status()  # fail fast if the image could not be fetched
    return base64.b64encode(response.content).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/expression-editor"

# Request payload
data = {
    "aaa": 0,
    "blink": 0,
    "eee": 0,
    "eyebrow": 0,
    "image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/ep-editor-ip.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "image_format": "png",
    "image_quality": 95,
    "pupil_x": 0,
    "pupil_y": 0,
    "rotate_pitch": 0,
    "rotate_roll": 0,
    "rotate_yaw": 0,
    "sample_parts": "OnlyExpression",
    "smile": 1,
    "wink": 0,
    "woo": 0
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response body is the generated image
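Since the response body is the raw binary image, printing it is rarely what you want in practice. A minimal sketch of handling the result, assuming a successful call returns image bytes (the helper name and output path below are illustrative, not part of the Segmind API):

```python
# Write the raw image bytes from the API response to disk.
# The function name and path are illustrative choices, not API requirements.
def save_generated_image(image_bytes, output_path):
    """Write raw image bytes (e.g. response.content) to output_path."""
    with open(output_path, 'wb') as f:
        f.write(image_bytes)
    return output_path

# Usage with the response above (guarded so a failed call is not saved as an image):
# if response.status_code == 200:
#     save_generated_image(response.content, "expression_edited.png")
# else:
#     print("Request failed:", response.status_code, response.text)
```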
Request parameters (names as they appear in the payload above):

aaa: Mouth shape for 'aaa' (min: -30, max: 30)
blink: Blink intensity (min: -20, max: 5)
eee: Mouth shape for 'eee' (min: -15, max: 15)
eyebrow: Eyebrow position adjustment (min: -10, max: 10)
image: URL of the input face image
image_format: Output image format. Allowed values:
image_quality: Image quality (min: 10, max: 100)
pupil_x: Pupil X-axis position (min: -15, max: 15)
pupil_y: Pupil Y-axis position (min: -15, max: 15)
URL of the reference expression image (parameter name not given above)
rotate_pitch: Pitch rotation in degrees (min: -20, max: 20)
rotate_roll: Roll rotation in degrees (min: -20, max: 20)
rotate_yaw: Yaw rotation in degrees (min: -20, max: 20)
sample_parts: Part of the face to sample. Allowed values:
smile: Smile intensity (min: -0.3, max: 1.3)
wink: Wink intensity (min: 0, max: 25)
woo: Mouth shape for 'woo' (min: -15, max: 15)
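Because every numeric parameter has a documented range, it can be useful to clamp a payload before sending it rather than let an out-of-range value fail server-side. A small sketch, with the ranges copied from the list above (the helper itself is not part of the API):

```python
# Documented min/max for the numeric parameters (from the parameter list).
PARAM_RANGES = {
    "aaa": (-30, 30), "blink": (-20, 5), "eee": (-15, 15),
    "eyebrow": (-10, 10), "image_quality": (10, 100),
    "pupil_x": (-15, 15), "pupil_y": (-15, 15),
    "rotate_pitch": (-20, 20), "rotate_roll": (-20, 20),
    "rotate_yaw": (-20, 20), "smile": (-0.3, 1.3),
    "wink": (0, 25), "woo": (-15, 15),
}

def clamp_payload(payload):
    """Return a copy of the payload with numeric values clamped into range."""
    clamped = dict(payload)
    for key, (lo, hi) in PARAM_RANGES.items():
        if key in clamped:
            clamped[key] = max(lo, min(hi, clamped[key]))
    return clamped
```

For example, `clamp_payload({"smile": 2.0})` pulls the smile value back down to the documented maximum of 1.3 before the request is made.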
To keep track of your credit usage, inspect the response headers of each API call: the x-remaining-credits header indicates the number of credits remaining in your account. Monitor this value to avoid disruptions in your API usage.
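A small sketch of reading that header from a response, assuming it carries a numeric value (the helper name is illustrative; requests exposes headers as a case-insensitive mapping, but the lookup below works on a plain dict too):

```python
def remaining_credits(headers):
    """Return the x-remaining-credits value as a float, or None if absent/unparseable."""
    for name, value in headers.items():
        if name.lower() == "x-remaining-credits":
            try:
                return float(value)
            except ValueError:
                return None
    return None

# Usage after a call:
# credits = remaining_credits(response.headers)
# if credits is not None and credits < 10:
#     print("Warning: low on credits:", credits)
```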
The Expression Editor is an advanced AI model designed to transform facial expressions in images. By leveraging a reference image, this model can accurately generate a new image of a person with the same expression as seen in the reference. This tool is ideal for applications in digital art, animation, and social media content creation.
1. Upload your image: Click the "Upload Image" button and select the image of the person you want to modify.
2. Upload a reference image (optional): Choose a reference image that demonstrates the desired expression.
3. Click the "Generate" button. The model will process the images and create a new image that transfers the reference expression onto the person's face. Use the advanced parameters to adjust the expression further if needed.
Rotate: Adjust pitch, yaw, and roll for optimal positioning.
Blink: Control the intensity of blinking.
Eyebrow: Manipulate eyebrow position for natural-looking expressions.
Wink: Create playful winks with adjustable intensity.
Pupil: Position the pupils for added realism.
Mouth: Shape your mouth for various expressions like "aaa," "eee," "woo," and smiles.
Begin with minor tweaks to see how each parameter affects the image, then gradually increase the adjustments to achieve the desired look. Combine different parameter adjustments to create more complex and nuanced expressions.
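One practical way to explore the parameters gradually is to sweep a single value while holding the rest fixed. A sketch building on the payload from the example above (the helper is hypothetical, and each variant would cost one API call):

```python
# Hypothetical sweep helper: vary one parameter while keeping the others fixed.
def payload_variants(base_payload, param, values):
    """Yield copies of base_payload with `param` set to each value in turn."""
    for v in values:
        variant = dict(base_payload)
        variant[param] = v
        yield variant

# e.g. five smile intensities spanning the documented -0.3..1.3 range:
# for p in payload_variants(data, "smile", [-0.3, 0.1, 0.5, 0.9, 1.3]):
#     requests.post(url, json=p, headers=headers)  # one credit-consuming call each
```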
Artists can use the Expression Editor to create consistent facial expressions for characters in digital art and animations.
Easily generate reaction images and memes by changing expressions to fit different contexts.
Tailor marketing materials with personalized expressions to better connect with target audiences.
Enhance visual effects by adjusting actors’ expressions in post-production to achieve the desired emotional impact.