Troubleshooting
Common Issues
1. Authentication Error:
- Verify your API key is correct
- Ensure the API key is properly formatted in the authorization header
- Check for any whitespace in the API key
- Confirm your account has proper permissions
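The checks above can be sketched in code. This is a minimal, illustrative helper (the function name is not part of any relaxAI SDK) that strips stray whitespace from a key before formatting the Bearer header:

```python
def build_auth_header(api_key: str) -> dict:
    """Strip stray whitespace/newlines (a common cause of 401s)
    and format the Bearer authorization header."""
    key = api_key.strip()
    if not key:
        raise ValueError("API key is empty after stripping whitespace")
    return {"Authorization": f"Bearer {key}"}

# A key read from a file often carries a trailing newline:
print(build_auth_header(" my-key\n")["Authorization"])  # → Bearer my-key
```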
2. Model Not Found:
- Double-check the model name spelling (case-sensitive)
- Verify the model is available in relaxAI
- Ensure you’re using the fully qualified model name
- Try using another available model as a test
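Assuming relaxAI exposes the OpenAI-compatible GET /models endpoint (the base URL is taken from the curl example in Debug Tips), a standard-library sketch for listing models and checking a name case-sensitively:

```python
import json
import urllib.request

API_BASE = "https://api.relax.ai/v1"  # assumed from the curl example in Debug Tips

def available_model_ids(api_key: str) -> list:
    """Fetch model IDs from the (assumed) OpenAI-compatible /models endpoint."""
    req = urllib.request.Request(
        f"{API_BASE}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return [m["id"] for m in data["data"]]

def model_exists(name: str, model_ids: list) -> bool:
    # Model names are case-sensitive: compare exactly, never lowercased.
    return name in model_ids

# Usage (requires a valid key):
# ids = available_model_ids("YOUR_RELAX_API_KEY")
# print(model_exists("Llama-4-Maverick-17B-128E", ids))
```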
3. Context Window Exceeded:
- Reduce input tokens by summarizing or chunking
- Split requests into smaller, manageable chunks
- Implement proper token counting to stay within limits
- Remember: The context window of DeepSeek-R1-0528 is 65k tokens. For problems requiring large context windows, please use Llama-4-Maverick-17B-128E (1M token context window)
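A rough sketch of token counting and chunking. The ~4-characters-per-token ratio is only a heuristic for English text; real counts depend on each model's tokenizer, so leave headroom under the limit (e.g., the 65k-token window of DeepSeek-R1-0528):

```python
def approx_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text.
    Real counts depend on the model's tokenizer."""
    return max(1, len(text) // 4)

def chunk_text(text: str, max_tokens: int) -> list:
    """Split text into pieces that each fit the token budget."""
    max_chars = max_tokens * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

doc = "word " * 2000              # ~10,000 characters
print(approx_tokens(doc))         # → 2500
print(len(chunk_text(doc, 500)))  # → 5
```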
4. Slow Response Times:
- Note: DeepSeek-R1-0528 is a reasoning model and thus has longer response times. For fast responses, try Llama-4-Maverick-17B-128E.
- Optimize your prompts to be more concise
- Implement proper timeout handling in your application
- Check network connectivity and latency
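One way to implement the timeout advice: pick a longer client-side deadline for reasoning models. The helper and its 4x multiplier are illustrative assumptions, not API guarantees:

```python
REASONING_MODELS = {"DeepSeek-R1-0528"}  # reasoning models answer slowly

def request_timeout(model: str, base: float = 30.0) -> float:
    """Client-side timeout policy: give reasoning models a larger deadline.
    The 4x multiplier is an illustrative assumption, not an API guarantee."""
    return base * 4 if model in REASONING_MODELS else base

# e.g. requests.post(url, json=payload, timeout=request_timeout(model))
```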
5. Integration-Specific Issues:
- Ensure you’re using the correct API base URL format
- Verify all required parameters are properly configured
- Check integration-specific documentation for any special requirements
- Look for error logs in the specific tool or framework
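A small sketch of normalizing the base URL before handing it to a client. Most OpenAI-compatible clients expect the base URL to end in /v1 (an assumption based on the curl example in Debug Tips); trailing slashes and missing path segments are frequent misconfigurations:

```python
def normalize_base_url(url: str) -> str:
    """Ensure the base URL has no trailing slash and ends with /v1,
    the path segment OpenAI-compatible clients typically expect."""
    url = url.strip().rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url

print(normalize_base_url("https://api.relax.ai/"))  # → https://api.relax.ai/v1
```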
Debug Tips
For debugging API calls, use curl to test the API directly:
```shell
curl https://api.relax.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $RELAX_API_KEY" \
  -d '{
    "model": "Llama-4-Maverick-17B-128E",
    "messages": [{"role": "user", "content": "Hello world"}]
  }'
```
Advanced Troubleshooting
1. Enable Verbose Logging:
- Most integrations allow enabling debug or verbose logging:
- LangChain: set langchain.debug = True
- Python requests: enable wire-level debug output via the logging module and http.client (requests itself has no verbose=True option)
- Node.js: set the DEBUG=openai* environment variable
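For Python's requests/urllib stack, wire-level logging can be enabled with standard-library mechanisms (this is generic Python HTTP debugging, not specific to relaxAI):

```python
import http.client
import logging

def enable_http_debug() -> None:
    """Dump request/response lines and headers for urllib- and
    requests-based HTTP calls."""
    http.client.HTTPConnection.debuglevel = 1
    logging.basicConfig(level=logging.DEBUG)
    logging.getLogger("urllib3").setLevel(logging.DEBUG)  # used by requests

enable_http_debug()
```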
2. Monitoring Tools:
- Use API monitoring tools like Postman or Insomnia for request inspection
- Implement logging middleware to capture request/response details
- Add timing metrics to identify performance bottlenecks
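A minimal timing decorator as a sketch of the logging-middleware and timing-metrics ideas above (names are illustrative; a real setup would ship latencies to your monitoring system):

```python
import time

def timed(fn):
    """Record the latency of each call on the wrapped function itself,
    a minimal stand-in for real logging middleware."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            wrapper.last_latency = time.perf_counter() - start
    wrapper.last_latency = None
    return wrapper

@timed
def demo_call(x):
    return x * 2  # stand-in for an API request

demo_call(21)
print(f"last call took {demo_call.last_latency:.6f}s")
```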
3. Rate Limiting Issues:
- Implement proper backoff and retry logic
- Monitor rate limit headers in API responses
- Add request queuing for high-volume applications
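The backoff-and-retry advice can be sketched as exponential backoff with jitter. Parameters are illustrative; check the rate-limit headers in API responses for authoritative retry hints:

```python
import random
import time

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with jitter: base * 2^attempt, capped, then
    scaled by a random factor so concurrent clients spread out."""
    return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)

def with_retries(fn, max_attempts: int = 5, is_retryable=lambda exc: True):
    """Retry fn() with backoff; re-raise on the final attempt or for
    errors the caller marks as non-retryable."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            if attempt == max_attempts - 1 or not is_retryable(exc):
                raise
            time.sleep(backoff_delay(attempt))
```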