Build AI Image Generator in Next.js with Flux.1 Kontext

Create stunning AI images in seconds. In this comprehensive tutorial, you'll learn how to build a professional AI image generation application using Next.js and Black Forest Labs' powerful FLUX API. We'll cover everything from setup to deployment, including advanced features like image editing and iterative refinement.
What You'll Build
By the end of this tutorial, you'll have a fully functional AI image generator that can:
- Generate images from text prompts using FLUX.1 Kontext models
- Edit existing images with simple text instructions
- Handle multiple aspect ratios and output formats
- Implement proper error handling and loading states
- Store and manage generated images securely
Prerequisites
Before we start, make sure you have:
- Node.js 18+ installed on your machine
- Basic knowledge of React and Next.js
- A Black Forest Labs account (we'll set this up together)
Step 1: Setting Up Your Black Forest Labs Account
Create Your Account
First, let's get your BFL account ready:
- Visit dashboard.bfl.ai and create an account
- Confirm your email address
- Log in to access your dashboard
Generate Your API Key
Once logged in, you'll land on your BFL dashboard. Create an API key by clicking the Add Key button.
Important Security Note: Keep your API key secure and never expose it in client-side code. Treat it like a password!
Step 2: Initialize Your Next.js Project
Let's create a new Next.js project with all the necessary dependencies. When create-next-app prompts you, choose TypeScript, Tailwind CSS, and the App Router, since the rest of the tutorial assumes them:
# Create new Next.js project
npx create-next-app@latest flux-ai-generator
cd flux-ai-generator
# Install required dependencies
npm install axios lucide-react clsx tailwind-merge
npm install -D @types/node
# Install additional UI dependencies
npm install @radix-ui/react-dialog @radix-ui/react-select
Step 3: Environment Configuration
Create your environment file with the BFL API configuration:
# .env.local
NEXT_PUBLIC_BFL_API_URL=https://api.bfl.ai
BFL_API_KEY=your_api_key_here
NEXT_PUBLIC_MAX_CONCURRENT_REQUESTS=24
Environment Variables Explained:
- NEXT_PUBLIC_BFL_API_URL: The primary global endpoint, recommended for most use cases; it fails over automatically between clusters for better uptime.
- BFL_API_KEY: Your secret API key (server-side only).
- NEXT_PUBLIC_MAX_CONCURRENT_REQUESTS: BFL allows a maximum of 24 active tasks at a time.
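Only variables prefixed with NEXT_PUBLIC_ are exposed to the browser, so BFL_API_KEY stays on the server. To fail fast when a variable is missing instead of crashing deep inside a request, you can add a tiny helper; the lib/env.ts filename and function name below are just suggestions:
// lib/env.ts — a minimal sketch for failing fast on missing configuration
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
The service in the next step could then call requireEnv('BFL_API_KEY') instead of relying on the non-null assertion operator.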
Step 4: Core API Service
Create a service class to handle all BFL API interactions:
// lib/bfl-service.ts
interface GenerationRequest {
prompt: string;
aspect_ratio?: string;
seed?: number;
output_format?: 'jpeg' | 'png';
safety_tolerance?: number;
}
interface EditingRequest extends GenerationRequest {
input_image: string; // Base64 encoded image
}
interface BFLResponse {
id: string;
polling_url?: string;
}
interface GenerationResult {
id: string;
status: 'Ready' | 'Pending' | 'Error';
result?: {
sample: string;
};
error?: string;
}
class BFLService {
private readonly apiKey: string;
private readonly baseUrl: string;
constructor() {
this.apiKey = process.env.BFL_API_KEY!;
this.baseUrl = process.env.NEXT_PUBLIC_BFL_API_URL!;
}
/**
* Generate image from text prompt using FLUX.1 Kontext
*/
async generateImage(request: GenerationRequest): Promise<BFLResponse> {
const response = await fetch(`${this.baseUrl}/v1/flux-kontext-pro`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
prompt: request.prompt,
aspect_ratio: request.aspect_ratio || '1:1',
seed: request.seed,
output_format: request.output_format || 'jpeg',
safety_tolerance: request.safety_tolerance || 2,
prompt_upsampling: true, // Recommended for T2I
}),
});
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || `HTTP ${response.status}`);
}
return response.json();
}
/**
* Edit existing image with text instructions
*/
async editImage(request: EditingRequest): Promise<BFLResponse> {
const response = await fetch(`${this.baseUrl}/v1/flux-kontext-pro`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
prompt: request.prompt,
input_image: request.input_image,
seed: request.seed,
output_format: request.output_format || 'jpeg',
safety_tolerance: request.safety_tolerance || 2,
}),
});
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || `HTTP ${response.status}`);
}
return response.json();
}
/**
* Poll for generation result using polling URL
*/
async pollResult(requestId: string, pollingUrl?: string): Promise<GenerationResult> {
// Use polling URL if provided (required for global endpoint)
const url = pollingUrl || `${this.baseUrl}/v1/get_result`;
const response = await fetch(url, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({ id: requestId }),
});
if (!response.ok) {
throw new Error(`Failed to poll result: ${response.status}`);
}
return response.json();
}
/**
* Download and store image from BFL delivery URL
*/
async downloadAndStoreImage(imageUrl: string): Promise<string> {
// Download the image
const response = await fetch(imageUrl);
if (!response.ok) {
throw new Error('Failed to download image');
}
const buffer = await response.arrayBuffer();
const base64 = Buffer.from(buffer).toString('base64');
// In a real app, you'd upload to your CDN/storage service
// For demo purposes, we'll return the base64 data URL
return `data:image/jpeg;base64,${base64}`;
}
}
export const bflService = new BFLService();
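downloadAndStoreImage returns a base64 data URL, which keeps the demo simple but makes API responses large. If you want something closer to real storage during local development, one option (assuming a public/generated folder; the file and function names are just suggestions) is to write the file to disk and return a path the browser can load:
// lib/store-image.ts — a minimal local-development sketch, not a production storage solution
import { mkdir, writeFile } from 'fs/promises';
import path from 'path';

export async function saveImageLocally(imageUrl: string): Promise<string> {
  const response = await fetch(imageUrl);
  if (!response.ok) {
    throw new Error('Failed to download image');
  }
  const buffer = Buffer.from(await response.arrayBuffer());

  // Write into public/ so Next.js serves it as a static asset
  const dir = path.join(process.cwd(), 'public', 'generated');
  await mkdir(dir, { recursive: true });

  const filename = `image-${Date.now()}.jpg`;
  await writeFile(path.join(dir, filename), buffer);

  return `/generated/${filename}`;
}
In production you would usually upload to object storage (S3, Cloudflare R2, Vercel Blob, and so on), since files written at runtime don't persist on most serverless hosts.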
Step 5: API Routes
Create Next.js API routes to handle image generation securely:
// app/api/generate/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { bflService } from '@/lib/bfl-service';
export async function POST(request: NextRequest) {
try {
const { prompt, aspect_ratio, seed, output_format } = await request.json();
// Validate input
if (!prompt || typeof prompt !== 'string') {
return NextResponse.json(
{ error: 'Valid prompt is required' },
{ status: 400 }
);
}
// Submit generation request
const result = await bflService.generateImage({
prompt,
aspect_ratio,
seed,
output_format,
});
return NextResponse.json(result);
} catch (error) {
console.error('Generation error:', error);
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Generation failed' },
{ status: 500 }
);
}
}
// app/api/edit/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { bflService } from '@/lib/bfl-service';
export async function POST(request: NextRequest) {
try {
const { prompt, input_image, seed, output_format } = await request.json();
if (!prompt || !input_image) {
return NextResponse.json(
{ error: 'Prompt and input image are required' },
{ status: 400 }
);
}
const result = await bflService.editImage({
prompt,
input_image,
seed,
output_format,
});
return NextResponse.json(result);
} catch (error) {
console.error('Edit error:', error);
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Edit failed' },
{ status: 500 }
);
}
}
// app/api/poll/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { bflService } from '@/lib/bfl-service';
export async function POST(request: NextRequest) {
try {
const { id, polling_url } = await request.json();
if (!id) {
return NextResponse.json(
{ error: 'Request ID is required' },
{ status: 400 }
);
}
const result = await bflService.pollResult(id, polling_url);
// If image is ready, download and store it
if (result.status === 'Ready' && result.result?.sample) {
try {
const storedImage = await bflService.downloadAndStoreImage(
result.result.sample
);
result.result.sample = storedImage;
} catch (downloadError) {
console.error('Download error:', downloadError);
// Continue with original URL if download fails
}
}
return NextResponse.json(result);
} catch (error) {
console.error('Polling error:', error);
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Polling failed' },
{ status: 500 }
);
}
}
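Before exposing these routes publicly, it's also worth respecting the 24 active-task limit mentioned in Step 3. A very rough in-memory counter is enough for a single-process deployment; the file and function names below are just suggestions (with multiple instances you'd want a shared store such as Redis instead):
// lib/concurrency.ts — a rough sketch; an in-memory counter only works for one server process
const MAX_ACTIVE = Number(process.env.NEXT_PUBLIC_MAX_CONCURRENT_REQUESTS ?? '24');
let activeTasks = 0;

export function tryAcquireSlot(): boolean {
  if (activeTasks >= MAX_ACTIVE) return false;
  activeTasks += 1;
  return true;
}

export function releaseSlot(): void {
  activeTasks = Math.max(0, activeTasks - 1);
}
You would call tryAcquireSlot() in the generate and edit routes before submitting to BFL (returning a 429 response when it fails) and releaseSlot() once polling reports Ready or Error.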
Step 6: Custom Hooks for State Management
Create reusable hooks to manage generation and editing states:
// hooks/useImageGeneration.ts
import { useState, useCallback } from 'react';
interface GenerationOptions {
prompt: string;
aspectRatio?: string;
seed?: number;
outputFormat?: 'jpeg' | 'png';
}
interface GenerationState {
isGenerating: boolean;
progress: string;
result: string | null;
error: string | null;
}
export function useImageGeneration() {
const [state, setState] = useState<GenerationState>({
isGenerating: false,
progress: '',
result: null,
error: null,
});
const generateImage = useCallback(async (options: GenerationOptions) => {
setState({
isGenerating: true,
progress: 'Submitting request...',
result: null,
error: null,
});
try {
// Submit generation request
const response = await fetch('/api/generate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(options),
});
if (!response.ok) {
const error = await response.json();
throw new Error(error.error || 'Generation failed');
}
const { id, polling_url } = await response.json();
// Poll for results
setState(prev => ({ ...prev, progress: 'Generating image...' }));
const result = await pollForResult(id, polling_url);
setState({
isGenerating: false,
progress: '',
result: result.result.sample,
error: null,
});
} catch (error) {
setState({
isGenerating: false,
progress: '',
result: null,
error: error instanceof Error ? error.message : 'Generation failed',
});
}
}, []);
const pollForResult = async (id: string, pollingUrl?: string) => {
const maxAttempts = 60; // 5 minutes maximum
let attempts = 0;
while (attempts < maxAttempts) {
const response = await fetch('/api/poll', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ id, polling_url: pollingUrl }),
});
if (!response.ok) {
throw new Error('Polling failed');
}
const result = await response.json();
if (result.status === 'Ready') {
return result;
}
if (result.status === 'Error') {
throw new Error(result.error || 'Generation failed');
}
// Wait 5 seconds before next poll
await new Promise(resolve => setTimeout(resolve, 5000));
attempts++;
}
throw new Error('Generation timeout');
};
const reset = useCallback(() => {
setState({
isGenerating: false,
progress: '',
result: null,
error: null,
});
}, []);
return {
...state,
generateImage,
reset,
};
}
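The hook waits a fixed 5 seconds between polls and gives up after 60 attempts. If you'd rather poll quickly at first and then back off, you could swap the wait step inside pollForResult for something like the snippet below (purely an optional variation; remember that maxAttempts then corresponds to a different total wait time):
// Hypothetical variation: wait 1s, 2s, 4s, 8s, then 10s between polls
const delayMs = Math.min(1000 * 2 ** attempts, 10_000);
await new Promise((resolve) => setTimeout(resolve, delayMs));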
Step 7: Main Image Generator Component
Create the main component with a modern, user-friendly interface:
// components/ImageGenerator.tsx
'use client';
import { useState } from 'react';
import { useImageGeneration } from '@/hooks/useImageGeneration';
import { AspectRatioSelector } from './AspectRatioSelector';
import { GeneratedImage } from './GeneratedImage';
import { LoadingSpinner } from './LoadingSpinner';
import { Wand2, Download, RefreshCw } from 'lucide-react';
const EXAMPLE_PROMPTS = [
"A serene mountain landscape at sunset with vibrant colors",
"A futuristic cityscape with flying cars and neon lights",
"A cute robot reading a book in a cozy library",
"Abstract art with flowing colors and geometric shapes",
];
export function ImageGenerator() {
const [prompt, setPrompt] = useState('');
const [aspectRatio, setAspectRatio] = useState('1:1');
const [seed, setSeed] = useState<number | undefined>();
const { isGenerating, progress, result, error, generateImage, reset } = useImageGeneration();
const handleGenerate = async () => {
if (!prompt.trim()) return;
await generateImage({
prompt: prompt.trim(),
aspectRatio,
seed,
outputFormat: 'jpeg',
});
};
const handleExampleClick = (examplePrompt: string) => {
setPrompt(examplePrompt);
};
const generateRandomSeed = () => {
setSeed(Math.floor(Math.random() * 1000000));
};
return (
<div className="max-w-4xl mx-auto p-6 space-y-8">
<div className="text-center space-y-4">
<h1 className="text-4xl font-bold bg-gradient-to-r from-purple-600 to-pink-600 bg-clip-text text-transparent">
AI Image Generator
</h1>
<p className="text-gray-600 text-lg">
Create stunning images with FLUX.1 Kontext - powered by Black Forest Labs
</p>
</div>
{/* Generation Form */}
<div className="bg-white rounded-xl shadow-lg p-6 space-y-6">
{/* Prompt Input */}
<div className="space-y-2">
<label className="block text-sm font-medium text-gray-700">
Describe your image
</label>
<textarea
value={prompt}
onChange={(e) => setPrompt(e.target.value)}
placeholder="A beautiful sunset over a mountain range..."
className="w-full p-4 border border-gray-300 rounded-lg focus:ring-2 focus:ring-purple-500 focus:border-transparent resize-none"
rows={4}
disabled={isGenerating}
/>
</div>
{/* Example Prompts */}
<div className="space-y-2">
<label className="block text-sm font-medium text-gray-700">
Try these examples:
</label>
<div className="flex flex-wrap gap-2">
{EXAMPLE_PROMPTS.map((example, index) => (
<button
key={index}
onClick={() => handleExampleClick(example)}
className="px-3 py-1 text-sm bg-gray-100 hover:bg-gray-200 rounded-full transition-colors"
disabled={isGenerating}
>
{example.slice(0, 30)}...
</button>
))}
</div>
</div>
{/* Options */}
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
<AspectRatioSelector
value={aspectRatio}
onChange={setAspectRatio}
disabled={isGenerating}
/>
<div className="space-y-2">
<label className="block text-sm font-medium text-gray-700">
Seed (optional)
</label>
<div className="flex gap-2">
<input
type="number"
value={seed || ''}
onChange={(e) => setSeed(e.target.value ? parseInt(e.target.value) : undefined)}
placeholder="Random"
className="flex-1 p-2 border border-gray-300 rounded-lg focus:ring-2 focus:ring-purple-500 focus:border-transparent"
disabled={isGenerating}
/>
<button
onClick={generateRandomSeed}
className="px-3 py-2 bg-gray-100 hover:bg-gray-200 rounded-lg transition-colors"
disabled={isGenerating}
>
<RefreshCw className="w-4 h-4" />
</button>
</div>
</div>
</div>
{/* Generate Button */}
<button
onClick={handleGenerate}
disabled={!prompt.trim() || isGenerating}
className="w-full py-4 px-6 bg-gradient-to-r from-purple-600 to-pink-600 text-white rounded-lg font-medium disabled:opacity-50 disabled:cursor-not-allowed hover:from-purple-700 hover:to-pink-700 transition-all duration-200 flex items-center justify-center gap-2"
>
{isGenerating ? (
<>
<LoadingSpinner />
{progress || 'Generating...'}
</>
) : (
<>
<Wand2 className="w-5 h-5" />
Generate Image
</>
)}
</button>
</div>
{/* Results */}
{error && (
<div className="bg-red-50 border border-red-200 rounded-lg p-4">
<div className="flex items-center gap-2">
<div className="w-2 h-2 bg-red-500 rounded-full"></div>
<p className="text-red-700 font-medium">Generation Error</p>
</div>
<p className="text-red-600 mt-1">{error}</p>
<button
onClick={reset}
className="mt-2 text-red-600 hover:text-red-700 underline"
>
Try again
</button>
</div>
)}
{result && (
<GeneratedImage
src={result}
alt={prompt}
prompt={prompt}
aspectRatio={aspectRatio}
seed={seed}
/>
)}
</div>
);
}
Step 8: Supporting Components
Create the supporting UI components:
// components/AspectRatioSelector.tsx
interface AspectRatioSelectorProps {
value: string;
onChange: (value: string) => void;
disabled?: boolean;
}
const ASPECT_RATIOS = [
{ value: '1:1', label: 'Square (1:1)', description: '1024×1024' },
{ value: '16:9', label: 'Landscape (16:9)', description: '1365×768' },
{ value: '9:16', label: 'Portrait (9:16)', description: '768×1365' },
{ value: '4:3', label: 'Standard (4:3)', description: '1182×886' },
{ value: '3:4', label: 'Portrait (3:4)', description: '886×1182' },
];
export function AspectRatioSelector({ value, onChange, disabled }: AspectRatioSelectorProps) {
return (
<div className="space-y-2">
<label className="block text-sm font-medium text-gray-700">
Aspect Ratio
</label>
<select
value={value}
onChange={(e) => onChange(e.target.value)}
disabled={disabled}
className="w-full p-2 border border-gray-300 rounded-lg focus:ring-2 focus:ring-purple-500 focus:border-transparent"
>
{ASPECT_RATIOS.map((ratio) => (
<option key={ratio.value} value={ratio.value}>
{ratio.label} - {ratio.description}
</option>
))}
</select>
</div>
);
}
// components/GeneratedImage.tsx
import { useState } from 'react';
import { Download, Copy, Edit } from 'lucide-react';
interface GeneratedImageProps {
src: string;
alt: string;
prompt: string;
aspectRatio: string;
seed?: number;
}
export function GeneratedImage({ src, alt, prompt, aspectRatio, seed }: GeneratedImageProps) {
const [isDownloading, setIsDownloading] = useState(false);
const handleDownload = async () => {
setIsDownloading(true);
try {
const response = await fetch(src);
const blob = await response.blob();
const url = URL.createObjectURL(blob);
const link = document.createElement('a');
link.href = url;
link.download = `ai-generated-${Date.now()}.jpg`;
document.body.appendChild(link);
link.click();
document.body.removeChild(link);
URL.revokeObjectURL(url);
} catch (error) {
console.error('Download failed:', error);
} finally {
setIsDownloading(false);
}
};
const copyPrompt = () => {
navigator.clipboard.writeText(prompt);
};
return (
<div className="bg-white rounded-xl shadow-lg overflow-hidden">
<div className="relative">
<img
src={src}
alt={alt}
className="w-full h-auto"
style={{ aspectRatio: aspectRatio.replace(':', '/') }}
/>
{/* Action Buttons */}
<div className="absolute top-4 right-4 flex gap-2">
<button
onClick={handleDownload}
disabled={isDownloading}
className="p-2 bg-black/50 hover:bg-black/70 text-white rounded-lg transition-colors"
title="Download Image"
>
<Download className="w-4 h-4" />
</button>
</div>
</div>
{/* Image Details */}
<div className="p-6 space-y-4">
<div>
<h3 className="font-medium text-gray-900 mb-2">Generated Image</h3>
<p className="text-gray-600 text-sm">{prompt}</p>
</div>
<div className="flex flex-wrap gap-4 text-sm text-gray-500">
<span>Aspect Ratio: {aspectRatio}</span>
{seed !== undefined && <span>Seed: {seed}</span>}
</div>
<div className="flex gap-2">
<button
onClick={copyPrompt}
className="flex items-center gap-2 px-3 py-2 bg-gray-100 hover:bg-gray-200 rounded-lg text-sm transition-colors"
>
<Copy className="w-4 h-4" />
Copy Prompt
</button>
</div>
</div>
</div>
);
}
// components/LoadingSpinner.tsx
export function LoadingSpinner() {
return (
<div className="animate-spin rounded-full h-5 w-5 border-b-2 border-white"></div>
);
}
Step 9: Main Page Implementation
// app/page.tsx
import { ImageGenerator } from '@/components/ImageGenerator';
export default function Home() {
return (
<main className="min-h-screen bg-gradient-to-br from-purple-50 to-pink-50">
<div className="container mx-auto py-8">
<ImageGenerator />
</div>
</main>
);
}
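Optionally, since app/page.tsx is a server component, you can also give the page proper metadata using the App Router's metadata export. The title and description below are just placeholders:
// app/page.tsx — optional page metadata (values are suggestions)
import type { Metadata } from 'next';

export const metadata: Metadata = {
  title: 'AI Image Generator | FLUX.1 Kontext',
  description: 'Generate and edit images with FLUX.1 Kontext by Black Forest Labs.',
};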
Step 10: Advanced Features
Image Editing Functionality
Add image editing capabilities to your application:
// components/ImageEditor.tsx
'use client';
import { useState, useCallback } from 'react';
import { Upload, Edit } from 'lucide-react';
export function ImageEditor() {
const [originalImage, setOriginalImage] = useState<string | null>(null);
const [editPrompt, setEditPrompt] = useState('');
const [isEditing, setIsEditing] = useState(false);
const [editedImage, setEditedImage] = useState<string | null>(null);
const handleImageUpload = useCallback((event: React.ChangeEvent<HTMLInputElement>) => {
const file = event.target.files?.[0];
if (!file) return;
const reader = new FileReader();
reader.onload = (e) => {
const result = e.target?.result as string;
setOriginalImage(result);
setEditedImage(null);
};
reader.readAsDataURL(file);
}, []);
const handleEdit = async () => {
if (!originalImage || !editPrompt.trim()) return;
setIsEditing(true);
try {
// Convert data URL to base64
const base64 = originalImage.split(',')[1];
const response = await fetch('/api/edit', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
prompt: editPrompt.trim(),
input_image: base64,
}),
});
if (!response.ok) {
throw new Error('Edit failed');
}
const { id, polling_url } = await response.json();
// Poll for results (similar to generation)
const result = await pollForResult(id, polling_url);
setEditedImage(result.result.sample);
} catch (error) {
console.error('Edit error:', error);
} finally {
setIsEditing(false);
}
};
return (
<div className="max-w-4xl mx-auto p-6 space-y-8">
<div className="text-center">
<h2 className="text-3xl font-bold mb-4">Image Editor</h2>
<p className="text-gray-600">
Upload an image and describe how you want to edit it
</p>
</div>
{/* Upload Section */}
<div className="bg-white rounded-xl shadow-lg p-6">
<div className="space-y-4">
<label className="block">
<span className="sr-only">Choose image to edit</span>
<input
type="file"
accept="image/*"
onChange={handleImageUpload}
className="block w-full text-sm text-gray-500 file:mr-4 file:py-2 file:px-4 file:rounded-full file:border-0 file:text-sm file:font-semibold file:bg-purple-50 file:text-purple-700 hover:file:bg-purple-100"
/>
</label>
{originalImage && (
<div className="grid grid-cols-1 md:grid-cols-2 gap-6 mt-4">
<div>
<h3 className="font-medium mb-2">Original Image</h3>
<div className="relative aspect-square bg-gray-100 rounded-lg overflow-hidden">
<img
src={originalImage}
alt="Original uploaded image"
className="object-contain w-full h-full"
/>
</div>
</div>
<div>
<h3 className="font-medium mb-2">Edited Result</h3>
<div className="relative aspect-square bg-gray-100 rounded-lg overflow-hidden">
{isEditing ? (
<div className="absolute inset-0 flex items-center justify-center">
<div className="animate-spin rounded-full h-12 w-12 border-t-2 border-b-2 border-purple-500"></div>
</div>
) : editedImage ? (
<img
src={editedImage}
alt="AI edited image"
className="object-contain w-full h-full"
/>
) : (
<div className="absolute inset-0 flex items-center justify-center text-gray-400">
<p>Edit result will appear here</p>
</div>
)}
</div>
</div>
</div>
)}
<div className="mt-4 space-y-4">
<div>
<label className="block text-sm font-medium text-gray-700 mb-1">
Edit Instructions
</label>
<textarea
value={editPrompt}
onChange={(e) => setEditPrompt(e.target.value)}
placeholder="Describe how you want to edit the image..."
className="w-full px-3 py-2 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-purple-500 focus:border-purple-500"
rows={3}
disabled={!originalImage || isEditing}
/>
</div>
<button
onClick={handleEdit}
disabled={isEditing || !editPrompt.trim() || !originalImage}
className="w-full py-2 px-4 border border-transparent rounded-md shadow-sm text-sm font-medium text-white bg-purple-600 hover:bg-purple-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-purple-500 disabled:bg-gray-300 disabled:cursor-not-allowed"
>
{isEditing ? 'Processing...' : 'Edit Image'}
</button>
</div>
</div>
</div>
</div>
);
}
The handleEdit above calls a pollForResult helper that doesn't exist inside this component yet. Here are self-contained versions of handleImageUpload and handleEdit that poll inline with setInterval instead:
// Inside the ImageEditor component
const handleImageUpload = (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (!file) return;
// Reset state when new image is uploaded
setEditPrompt('');
setEditedImage(null);
const reader = new FileReader();
reader.onload = (event) => {
setOriginalImage(event.target?.result as string);
};
reader.readAsDataURL(file);
};
const handleEdit = async () => {
if (!originalImage || !editPrompt.trim()) return;
setIsEditing(true);
try {
// Call our API endpoint
const response = await fetch('/api/edit', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
// Strip the data URL prefix; the /api/edit route expects raw base64 in input_image
input_image: originalImage.split(',')[1],
prompt: editPrompt,
}),
});
if (!response.ok) {
throw new Error('Failed to edit image');
}
const data = await response.json();
// Poll for results
const pollingInterval = setInterval(async () => {
const pollResponse = await fetch('/api/poll', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
id: data.id,
polling_url: data.polling_url,
}),
});
const pollData = await pollResponse.json();
if (pollData.status === 'Ready' && pollData.result?.sample) {
// The /api/poll route already returns a usable data URL (or delivery URL), so use it directly
setEditedImage(pollData.result.sample);
clearInterval(pollingInterval);
setIsEditing(false);
} else if (pollData.status === 'Error') {
// Throwing here wouldn't reach the outer catch, so handle the failure directly
clearInterval(pollingInterval);
setIsEditing(false);
alert(pollData.error || 'Failed to process image');
}
}, 1000);
// Stop polling after 5 minutes (timeout); note that isEditing is stale inside this closure,
// so simply clear the interval and reset the flag unconditionally
setTimeout(() => {
clearInterval(pollingInterval);
setIsEditing(false);
}, 300000);
} catch (error) {
console.error('Error editing image:', error);
alert('Failed to edit image. Please try again.');
setIsEditing(false);
}
};
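One caveat with this setInterval approach: if the user navigates away while an edit is still in flight, the interval keeps firing. A small safeguard, assuming you keep the interval id in a ref (you'd also need to import useRef and useEffect from 'react'), could look like this:
// Inside ImageEditor — hypothetical unmount cleanup for the polling interval
const pollingRef = useRef<ReturnType<typeof setInterval> | null>(null);

useEffect(() => {
  // Clear any in-flight polling when the component unmounts
  return () => {
    if (pollingRef.current) clearInterval(pollingRef.current);
  };
}, []);

// In handleEdit, store the interval id in the ref instead of a local variable:
// pollingRef.current = setInterval(async () => { ... }, 1000);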
FLUX.1 Kontext is, in Black Forest Labs' own words, "a suite of generative flow matching models that allow you to generate and edit images. Unlike traditional text-to-image models, Kontext understands both text AND images as input, enabling true in-context generation and editing." With the generator and editor you've built in this tutorial, your Next.js app now takes advantage of both capabilities.