If you're looking for an API, you can choose from your desired programming language.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sd1.5-majicmix"

# Request payload
data = {
    "prompt": "best quality, masterpiece, (photorealistic:1.4), 1boy, (50 years old:1.2) beard, dramatic lighting, hyper quality, intricate detail, ultra realistic, maximum detail, foreground focus, instagram, 8k, volumetric light, cinematic, octane render, uplight, no blur, 8k",
    "negative_prompt": "nsfw, ng_deepnegative_v1_75t, badhandv4, (worst quality:2), (low quality:2), (normal quality:2), lowres, watermark, monochrome",
    "scheduler": "dpmpp_2m",
    "num_inference_steps": 40,
    "guidance_scale": 7,
    "samples": 1,
    "seed": 720692316127,
    "img_width": 512,
    "img_height": 768,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
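Because the endpoint returns the raw image bytes when "base64" is False, the response body can be written straight to a file. A minimal sketch (the helper name and the output filename are arbitrary choices, not part of the API):

```python
def save_image(content: bytes, path: str) -> None:
    # Write the raw image bytes returned by the API to disk.
    with open(path, "wb") as f:
        f.write(content)

# Example usage after the request above:
# save_image(response.content, "output.jpg")
```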
prompt: Prompt to render.
negative_prompt: Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.
scheduler: Type of scheduler. Allowed values:
num_inference_steps: Number of denoising steps. min: 20, max: 100
guidance_scale: Scale for classifier-free guidance. min: 0.1, max: 25
samples: Number of samples to generate. min: 1, max: 4
seed: Seed for image generation.
img_width: Width of the image. Allowed values:
img_height: Height of the image. Allowed values:
base64: Base64 encoding of the output image.
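The documented ranges can be checked client-side before sending a request. A hypothetical helper (not part of any Segmind SDK) that validates a payload against the min/max limits listed above:

```python
def validate_payload(data: dict) -> list:
    """Return a list of constraint violations based on the documented limits."""
    limits = {
        "num_inference_steps": (20, 100),
        "guidance_scale": (0.1, 25),
        "samples": (1, 4),
    }
    errors = []
    for key, (low, high) in limits.items():
        value = data.get(key)
        if value is not None and not (low <= value <= high):
            errors.append(f"{key}={value} outside [{low}, {high}]")
    return errors
```

Calling this on the sample payload above should return an empty list, since all its values fall inside the documented ranges.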
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
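For example, the header can be read directly off the response object returned by requests (whose headers mapping is case-insensitive); a small sketch, assuming the header value is a plain integer string:

```python
def remaining_credits(headers):
    # Parse the x-remaining-credits response header, if present.
    value = headers.get("x-remaining-credits")
    return int(value) if value is not None else None

# Example usage after an API call:
# credits = remaining_credits(response.headers)
```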
The MajicMix model, based on the robust Stable Diffusion 1.5 framework, specializes in creating photorealistic images. It's a powerful tool for professionals and enthusiasts alike seeking to produce high-fidelity visuals with ease and precision. The model is fine-tuned to produce images with remarkable realism, capturing intricate details and textures that mimic real-life photography.
Photorealistic Outputs: Generates images with lifelike quality, making them almost indistinguishable from actual photographs.
High Detail and Texture: Excels in capturing fine details and realistic textures in the generated images.
Art and Illustration: Generate photorealistic art pieces for various creative projects.
Digital Marketing: Create stunning visuals for advertising and social media campaigns.
Digital Art Creation: Artists can produce lifelike portraits and character designs.
Gaming Industry: Game developers can craft hyper-realistic characters, enhancing player immersion.
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image, using the StableDiffusionImg2ImgPipeline from diffusers.
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.
Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training required.