# AI-Powered Content Generation: DeepSeek for Product Descriptions

This article explains how we use DeepSeek AI to generate product descriptions, translate content, and create marketing copy at scale.
## The Problem: Scaling Content Creation

We have 5,000+ product configurations that need:

- Product descriptions: 200-300 words per product
- Meta descriptions: 150-160 characters for SEO
- Feature highlights: bullet points for key features
- Marketing copy: compelling sales language

Writing manually:

- 5,000 products × 30 minutes per product = 2,500 hours (about 104 days)
- Consistency: every description must follow the brand voice
- Updates: descriptions must be rewritten whenever products change
- Translations: multiply by 10 languages = 25,000 hours

This doesn't scale. We need AI automation.
## The Solution: DeepSeek AI

We use the DeepSeek model to generate content:

- Model: DeepSeek-V3 (671B parameters)
- API: deepseek.com, with Together.ai as a fallback
- Cost: $0.27 per million input tokens, $1.10 per million output tokens
- Speed: ~50 tokens/second (roughly 10 seconds per description, including request overhead)

This lets us generate 5,000 descriptions in ~14 hours for roughly $3 in API cost.
## Unified AI Generation Interface

We built a unified interface supporting multiple providers:

```python
from app.shared.ai_generation import generate_content

description = generate_content(
    prompt="Write a product description for Treo N100 Mini PC",
    provider="deepseek",
    max_tokens=800,
    temperature=0.7,
    system_prompt="You are a technical writer...",
)
```
Supported providers:

- DeepSeek: primary provider (deepseek.com)
- Gemini: fallback provider (Google)
- Together.ai: DeepSeek fallback (if deepseek.com fails)
## DeepSeek API Integration

### Direct API Call

We call the deepseek.com API directly:

```python
response = requests.post(
    "https://api.deepseek.com/v1/chat/completions",
    headers={
        # ... (implementation details omitted)
    },
)
```
### Fallback to Together.ai

If deepseek.com fails, we fall back to Together.ai:

```python
response = requests.post(
    "https://api.together.xyz/v1/chat/completions",
    headers={
        # ... (implementation details omitted)
    },
)
```

This ensures high availability even if one provider is down.
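The fallback pattern can be sketched end-to-end. This is a minimal illustration, not our actual client: the endpoint URLs match the snippets above, but the model identifiers (`deepseek-chat`, `deepseek-ai/DeepSeek-V3`) and the helper names are assumptions.

```python
import json
from urllib import request as urlrequest
from urllib.error import URLError

ENDPOINTS = [
    # (name, OpenAI-compatible chat endpoint, model id) -- model ids are
    # assumptions; check each provider's model catalog.
    ("deepseek", "https://api.deepseek.com/v1/chat/completions", "deepseek-chat"),
    ("together", "https://api.together.xyz/v1/chat/completions",
     "deepseek-ai/DeepSeek-V3"),
]

def build_payload(model, system_prompt, user_prompt,
                  temperature=0.7, max_tokens=800):
    """Both providers accept the OpenAI-style chat-completions body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def generate_with_fallback(system_prompt, user_prompt, api_keys):
    """Try DeepSeek first, then Together.ai; return text or None."""
    for name, url, model in ENDPOINTS:
        body = json.dumps(build_payload(model, system_prompt, user_prompt))
        req = urlrequest.Request(
            url,
            data=body.encode("utf-8"),
            headers={"Authorization": f"Bearer {api_keys[name]}",
                     "Content-Type": "application/json"},
        )
        try:
            with urlrequest.urlopen(req, timeout=60) as resp:
                data = json.load(resp)
            return data["choices"][0]["message"]["content"]
        except (URLError, KeyError, TimeoutError):
            continue  # provider down or malformed reply: try the next one
    return None
```

Because both providers speak the same OpenAI-compatible protocol, only the URL, model id, and API key change between attempts.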
## System Prompt Design for Caching

DeepSeek supports prompt caching to reduce costs:

- System prompt: cached (reused across requests)
- User prompt: not cached (unique per request)

We design system prompts to be reusable:

```python
system_prompt = """You are a technical writer for Thinvent, a manufacturer of mini PCs, thin clients, and industrial computers.

Write product descriptions that:
... (implementation details omitted)
"""
```

This system prompt is cached and reused for all 5,000 products, cutting its input cost by roughly 74% (about $1.20 per full run; see Cost Optimization below).
## Cache Hit Logging

We log cache hits to monitor savings:

```python
usage = result.get("usage", {})
cache_hits = usage.get("prompt_cache_hit_tokens", 0)
cache_misses = usage.get("prompt_cache_miss_tokens", 0)
if cache_hits > 0:
    logging.info(f"DeepSeek cache HIT: {cache_hits:,} tokens (miss: {cache_misses})")
```

Example output:

```
DeepSeek cache HIT: 1,200 tokens (miss: 50)
DeepSeek cache HIT: 1,200 tokens (miss: 48)
DeepSeek cache HIT: 1,200 tokens (miss: 52)
```

This shows the system prompt (1,200 tokens) is cached, while the unique user prompts (~50 tokens each) are not.
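The same `usage` fields can be reduced to a single monitoring metric. A small helper (hypothetical, not part of our codebase) computes the cache hit rate:

```python
def cache_hit_rate(usage):
    """Fraction of prompt tokens served from DeepSeek's cache.

    `usage` is the dict returned in the API response, containing
    "prompt_cache_hit_tokens" and "prompt_cache_miss_tokens".
    """
    hits = usage.get("prompt_cache_hit_tokens", 0)
    misses = usage.get("prompt_cache_miss_tokens", 0)
    total = hits + misses
    return hits / total if total else 0.0

# With the example numbers above: 1200 / (1200 + 50) = 0.96
rate = cache_hit_rate({"prompt_cache_hit_tokens": 1200,
                       "prompt_cache_miss_tokens": 50})
```

A hit rate near 1.0 confirms the system prompt is being reused; a sudden drop usually means the system prompt changed and invalidated the cache.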
## Temperature Settings

We use different temperatures for different use cases:

### Product Descriptions (0.7)

Purpose: balanced creativity and consistency

```python
description = generate_content(
    prompt=f"Write a description for {product_name}",
    temperature=0.7,
)
```

Result: natural-sounding descriptions with some variation

### Translations (1.3)

Purpose: more creative, natural translations

```python
translation = generate_content(
    prompt=f"Translate to {lang}: {text}",
    temperature=1.3,
)
```

Result: idiomatic translations that sound native

### Technical Specs (0.3)

Purpose: factual, consistent output

```python
specs = generate_content(
    prompt=f"List specifications for {product_name}",
    temperature=0.3,
)
```

Result: consistent, accurate specifications

### Marketing Copy (1.0)

Purpose: creative, engaging content

```python
copy = generate_content(
    prompt=f"Write marketing copy for {product_name}",
    temperature=1.0,
)
```

Result: compelling, varied marketing language
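The per-task settings above can live in one table so callers never hard-code magic numbers. This is a sketch; the task labels are our own, not an existing API:

```python
# Per-task temperature defaults, matching the settings described above.
TEMPERATURES = {
    "description": 0.7,  # balanced creativity and consistency
    "translation": 1.3,  # idiomatic, natural phrasing
    "specs": 0.3,        # factual, consistent output
    "marketing": 1.0,    # creative, engaging copy
}

def temperature_for(task):
    """Look up the temperature for a task, defaulting to 0.7."""
    return TEMPERATURES.get(task, 0.7)
```

Centralizing the table also makes it trivial to retune a single task later without hunting through call sites.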
## Use Cases

### 1. Product Descriptions

Input: product SKU and features

Prompt:

```
Write a product description for:
- Name: Treo N100 Mini PC
- Processor: Intel N100 (4 cores, 3.4GHz)
- RAM: 8GB DDR4
- Storage: 256GB SSD
- Ports: 2× HDMI, 4× USB 3.0, Gigabit Ethernet
- OS: Windows 11 Pro
```

Output:

```
The Thinvent Treo N100 Mini PC is a compact powerhouse designed for business and industrial applications. Powered by the Intel N100 processor with 4 cores running at 3.4GHz, it delivers reliable performance for everyday computing tasks.

With 8GB DDR4 RAM and a 256GB SSD, the Treo provides smooth multitasking and fast boot times. The dual HDMI ports support multi-monitor setups, while 4 USB 3.0 ports and Gigabit Ethernet ensure seamless connectivity.

Pre-installed with Windows 11 Pro, the Treo is ready for business use out of the box. Its fanless design ensures silent operation, making it ideal for office environments, digital signage, and industrial automation.

Backed by Thinvent's 3-year warranty and certified to BIS, RoHS, and ISO standards, the Treo N100 Mini PC is built to last.
```
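The line-per-attribute prompt shown above can be rendered from structured product data. A hypothetical helper (the dict field names are assumptions, not our actual schema):

```python
def build_description_prompt(product):
    """Render a product dict into the line-per-attribute prompt format above."""
    lines = ["Write a product description for:"]
    for label, key in [("Name", "name"), ("Processor", "cpu"), ("RAM", "ram"),
                       ("Storage", "storage"), ("Ports", "ports"), ("OS", "os")]:
        if key in product:  # skip attributes the product doesn't define
            lines.append(f"- {label}: {product[key]}")
    return "\n".join(lines)
```

Generating the prompt from data keeps prompts consistent across 5,000 SKUs and means a spec change only has to be made in the product record.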
### 2. Content Translation

Input: English text and target language

Prompt:

```
Translate to Hindi:
"The Thinvent Treo N100 Mini PC is a compact powerhouse designed for business and industrial applications."
```

Output:

```
Thinvent Treo N100 मिनी पीसी एक कॉम्पैक्ट पावरहाउस है जो व्यावसायिक और औद्योगिक अनुप्रयोगों के लिए डिज़ाइन किया गया है।
```
### 3. Meta Descriptions

Input: product name and key features

Prompt:

```
Write a 150-character meta description for:
Treo N100 Mini PC with Intel N100, 8GB RAM, 256GB SSD, Windows 11 Pro
```

Output:

```
Compact Treo N100 Mini PC with Intel N100, 8GB RAM, 256GB SSD, Windows 11 Pro. Perfect for business and industrial use. 3-year warranty.
```
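Models don't reliably respect character limits, so the 150-160 character SEO budget is worth enforcing after generation. A simple guard (hypothetical helper, not our production code):

```python
def check_meta_description(text, limit=160):
    """Return the text unchanged if it fits, else trim at a word boundary."""
    text = text.strip()
    if len(text) <= limit:
        return text
    # Reserve 3 characters for the ellipsis, then cut back to the last space.
    cut = text[: limit - 3].rsplit(" ", 1)[0]
    return cut.rstrip(".,;") + "..."
```

Descriptions that needed trimming can also be flagged for regeneration with a stricter prompt, since a truncated sentence reads worse in search results than a shorter complete one.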
### 4. Feature Highlights

Input: product features

Prompt:

```
Write 5 bullet points highlighting key features of:
Treo N100 Mini PC
```

Output:

```
• Intel N100 processor (4 cores, 3.4GHz) for reliable performance
• 8GB DDR4 RAM and 256GB SSD for smooth multitasking
• Dual HDMI ports for multi-monitor setups
• Fanless design for silent operation
• Windows 11 Pro pre-installed, ready for business use
```
## Batch Processing

We process content in batches for efficiency:

```python
from app.web.models_product_description import ProductDescription

# Queue descriptions for generation
# ... (implementation details omitted)
```

This allows processing thousands of descriptions in the background.
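A background worker for that queue might look like the following. This is a schematic sketch, not the real `ProductDescription` model: the `pending`/`done`/`retry` status values and the injected `generate` callable are assumptions.

```python
def process_queue(items, generate, batch_size=10):
    """Process queued description jobs in fixed-size batches.

    `items` are dicts with "status" and "prompt" keys; `generate` maps a
    prompt to generated text or None. Returns the number of items completed.
    """
    done = 0
    for start in range(0, len(items), batch_size):
        for item in items[start:start + batch_size]:
            if item.get("status") != "pending":
                continue  # already done, or awaiting retry
            content = generate(item["prompt"])
            if content:
                item["status"] = "done"
                item["content"] = content
                done += 1
            else:
                item["status"] = "retry"  # a later run picks these up
    return done
```

Keeping the generator injectable makes the worker trivial to unit-test with a stub instead of a live API call.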
## Error Handling

We handle API failures gracefully:

```python
try:
    content = generate_content(prompt, provider="deepseek")
    if content:
        # ... (implementation details omitted)
        pass
except Exception as exc:
    logging.error(f"Generation failed: {exc}")
    content = None
```

If generation fails, we:

1. Log the error
2. Return None (the caller handles fallback)
3. Retry later (for queued items)
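The "retry later" step can use exponential backoff so transient provider hiccups don't immediately requeue an item. A minimal sketch (the delay values are illustrative, not our production settings):

```python
import time

def generate_with_retry(generate, prompt, attempts=3, base_delay=1.0):
    """Call generate(prompt), backing off 1s, 2s, 4s... between failures.

    Returns the generated text, or None after all attempts fail so the
    caller can log the error and requeue the item.
    """
    for attempt in range(attempts):
        result = generate(prompt)
        if result is not None:
            return result
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return None
```

Backoff matters here because a rate-limited provider will keep failing if hammered immediately, turning one transient error into a stuck queue.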
## Cost Optimization

We optimize costs through:

### 1. Prompt Caching

- System prompt: 1,200 tokens × 5,000 products = 6M input tokens
- Without caching: 6M × $0.27/M = $1.62
- With caching: cache hits are billed at DeepSeek's reduced cache-hit rate of $0.07/M, so 6M × $0.07/M ≈ $0.42 (~74% savings on the system prompt)
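The arithmetic can be checked with a small calculator using DeepSeek's published rates ($0.27/M cache-miss input, $0.07/M cache-hit input, $1.10/M output); the token counts are the per-description figures used throughout this article:

```python
PRICE_IN_MISS = 0.27 / 1_000_000  # USD per input token (cache miss)
PRICE_IN_HIT = 0.07 / 1_000_000   # USD per input token (cache hit)
PRICE_OUT = 1.10 / 1_000_000      # USD per output token

def run_cost(n_products, sys_tokens=1200, user_tokens=50, out_tokens=250,
             cached=True):
    """Estimated USD cost of one full generation run.

    With caching, the system prompt is billed at the cache-hit rate
    (ignoring the single cache miss on the first request of the run).
    """
    sys_rate = PRICE_IN_HIT if cached else PRICE_IN_MISS
    per_item = (sys_tokens * sys_rate
                + user_tokens * PRICE_IN_MISS
                + out_tokens * PRICE_OUT)
    return n_products * per_item
```

For 5,000 products this yields roughly $3.06 uncached versus $1.86 cached, matching the figures in the Performance Characteristics section.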
### 2. Batch Processing

Process multiple items in one API call:

```python
prompt = "Write descriptions for:\n"
for sku in batch_skus:
    prompt += f"- {sku}\n"
```

This reduces API overhead and increases throughput.
### 3. Temperature Tuning

We observe that lower temperatures tend to produce terser output in our prompts (temperature controls sampling randomness, not length directly, but higher settings make the model ramble more):

- Temperature 0.3: ~200 tokens per description
- Temperature 0.7: ~250 tokens per description
- Temperature 1.3: ~300 tokens per description

Use the lowest temperature the task allows.
## Performance Characteristics

Generation speed:

- Single description: ~10 seconds
- Batch of 10: ~30 seconds (3 seconds per description)
- 5,000 descriptions: ~14 hours (batch processing)

Cost per description (without caching):

- Input tokens: 1,200 (system) + 50 (user) = 1,250 tokens
- Output tokens: 250 tokens
- Cost: 1,250 × $0.27/M + 250 × $1.10/M ≈ $0.00061
- 5,000 descriptions: ≈ $3.06

With caching (system prompt billed at the $0.07/M cache-hit rate):

- Cost per description: 1,200 × $0.07/M + 50 × $0.27/M + 250 × $1.10/M ≈ $0.00037
- 5,000 descriptions: ≈ $1.86 (~39% savings)
## Integration with SEO Pipeline

AI-generated content integrates with the SEO pipeline:

### Step 9: Migrate Descriptions

The SEO pipeline generates descriptions for all products:

```python
for sku in product_skus:
    description = generate_content(
        prompt=f"Write description for {sku}",
        # ... (implementation details omitted)
    )
```

See SEO Pipeline Overview for details.
### Translation Queue

Missing translations are queued and processed with AI:

```python
for item in translation_queue:
    translation = generate_content(
        prompt=f"Translate to {item['lang']}: {item['text']}",
        provider="deepseek",
        temperature=1.3,
    )
    translation_manager.set(item['text'], item['lang'], translation)
```

See Translation System for details.
## References

### AI Services

- DeepSeek - official website
- DeepSeek API Documentation - API docs
- Together.ai - fallback provider

### Technical Concepts

- Prompt Caching - DeepSeek docs
- Temperature Sampling - Wikipedia

### Related Articles

- Translation System - how we translate content
- SEO Pipeline Overview - complete pipeline architecture
- System Prompt Design - designing reusable prompts
## Summary

We use DeepSeek AI to generate content at scale:

- Model: DeepSeek-V3 (671B parameters)
- API: deepseek.com with Together.ai fallback
- Use cases:
  - Product descriptions (temperature 0.7)
  - Translations (temperature 1.3)
  - Technical specs (temperature 0.3)
  - Marketing copy (temperature 1.0)

Optimization:

- Prompt caching (~74% savings on system-prompt input tokens)
- Batch processing (3× faster)
- Temperature tuning (~20% fewer tokens)

Performance:

- 5,000 descriptions in ~14 hours
- Cost: ~$1.86 with caching (~39% savings)
- Speed: ~3 seconds per description (batched)

Integration:

- SEO pipeline (Step 9)
- Translation queue (background worker)
- Product description table (DynamoDB)

This AI-powered approach lets us generate and maintain thousands of product descriptions across multiple languages with minimal manual effort.