const axios = require('axios');
const fs = require('fs');
const path = require('path');

// Helper to convert a local image into a base64-encoded string
async function toB64(imgPath) {
  const data = fs.readFileSync(path.resolve(imgPath));
  return Buffer.from(data).toString('base64');
}

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/gemini-1.5-flash";

// Multi-turn conversation payload: alternating "user" and "assistant" messages
const data = {
  "messages": [
    {
      "role": "user",
      "content": "tell me a joke on cats"
    },
    {
      "role": "assistant",
      "content": "here is a joke about cats..."
    },
    {
      "role": "user",
      "content": "now a joke on dogs"
    }
  ]
};

(async function() {
  try {
    const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
    console.log(response.data);
  } catch (error) {
    // error.response is undefined for network-level failures, so guard before reading it
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
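The toB64 helper above is defined but not used in this example. A minimal sketch of how it might be invoked is shown below; the file path ./cat.png is purely illustrative, and how the resulting base64 string should be attached to the request body depends on the image format this endpoint expects.

(async function() {
  // Convert a local image to base64 using the helper defined earlier; the path is only an example
  const imageB64 = await toB64('./cat.png');
  console.log('Encoded image length:', imageB64.length);
})();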
messages: An array of objects containing the role and content.
role: Could be "user", "assistant", or "system".
content: A string containing the user's query or the assistant's response.
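Because the role field also accepts "system", a request body can open with a system instruction before the user and assistant turns. A minimal sketch is below; the instruction text itself is only an example.

const dataWithSystemPrompt = {
  "messages": [
    // A system message can set the assistant's behavior for the whole conversation
    { "role": "system", "content": "You are a witty assistant that tells short, family-friendly jokes." },
    { "role": "user", "content": "tell me a joke on cats" }
  ]
};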
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
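For example, reusing the url, data, and api_key variables from the snippet above, the balance can be read from the response headers of an axios call; this is a minimal sketch rather than an official helper.

(async function() {
  const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
  // axios exposes response headers on response.headers with lowercase keys
  console.log('Remaining credits:', response.headers['x-remaining-credits']);
})();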
Gemini 1.5 Flash is another exciting addition to the Gemini family of large language models by Google DeepMind. It's specifically designed for tasks that require speed and efficiency, making it a great choice for high-volume applications. Here's what makes it stand out:
Blazing Speed: As the name suggests, Flash prioritizes speed. It boasts sub-second average first-token latency, meaning it can start responding to your requests almost instantly, which is ideal for real-time interactions or applications that require quick responses.
Cost-Effective: Compared to other models, Flash is lighter-weight and requires less processing power to run. This translates to significant cost savings, especially for large-scale deployments.
Long Context Window: Despite its focus on speed, Flash surprisingly retains the impressive long-context window of its sibling, Gemini 1.5 Pro. This allows it to process inputs of up to one million tokens, making it suitable for tasks that require understanding complex contexts, even at high speeds.
Focus on Specific Tasks: While 1.5 Pro excels at a wide range of tasks, Flash is optimized for specific use cases like chat applications, where fast response times and efficient processing are crucial.
Gemini 1.5 Flash is a game-changer for developers and enterprises seeking a speedy and cost-effective large language model with exceptional long-context understanding. If you prioritize real-time interactions or have high-volume tasks, Flash is definitely worth considering.