Getting Started
How do I create my first workflow?
- Sign up at app.flowscale.ai
- Create a project from your dashboard
- Import a workflow by uploading a ComfyUI JSON file, or start with a template from our pre-built collection
- Test the workflow in the cloud editor
- Deploy when you’re ready to create an API
What file formats can I import?
- Workflow JSON: Standard ComfyUI export format
- PNG with metadata: Images with embedded workflow data
- Workflow templates: FlowScale-specific templates
Why can't I see my uploaded models?
- Supported formats: .safetensors, .ckpt, .pth, .bin
- Ensure files aren’t corrupted during upload
- Large models may take time to process
- Check the upload progress indicator
- Models appear in the appropriate category (Checkpoints, LoRAs, VAEs)
- Use the search function if you have many models
- Sometimes a browser refresh resolves display issues
How do I invite team members to my project?
- Go to project settings (gear icon in your project)
- Click “Team Members” tab
- Click “Invite Member”
- Enter email address and select role level
- Send invitation - they’ll receive an email to join
- Admin: Full project management access
- Owner: Complete control including billing
Workflow Issues
My workflow fails with 'Model not found' error
- Check if all required models are uploaded to your project
- Verify model names match exactly (case-sensitive)
- Ensure models are in the correct folders (checkpoints/, loras/, etc.)
- Use relative paths in your workflow, not absolute paths
- FlowScale automatically maps model paths to the correct locations
- Ensure model format is supported (.safetensors, .ckpt, .pth)
- Check if the model is compatible with your ComfyUI nodes
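A quick way to catch the path problem above is to scan your exported workflow JSON for absolute paths before uploading. This is a minimal sketch, assuming the standard ComfyUI API-format export (nodes keyed by ID, each with an "inputs" dict); your export may be structured differently, and the node values shown are made up for illustration:

```python
import json

def find_absolute_paths(workflow: dict) -> list[tuple[str, str]]:
    """Return (node_id, value) pairs for string inputs that look like absolute paths."""
    flagged = []
    for node_id, node in workflow.items():
        for field, value in node.get("inputs", {}).items():
            # Unix paths start with "/"; Windows paths contain ":\"
            if isinstance(value, str) and (value.startswith("/") or ":\\" in value):
                flagged.append((node_id, value))
    return flagged

# Example export with one absolute (bad) and one relative (good) model path:
workflow = json.loads("""{
  "4": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "C:\\\\models\\\\sd_xl_base.safetensors"}},
  "7": {"class_type": "LoraLoader",
        "inputs": {"lora_name": "loras/detail_tweaker.safetensors"}}
}""")

print(find_absolute_paths(workflow))  # only node "4" is flagged
```

Anything flagged should be rewritten as a path relative to the model folders (checkpoints/, loras/, etc.) so FlowScale's automatic path mapping can resolve it.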
Workflow runs locally but fails in FlowScale
- Install required custom nodes in FlowScale
- Some nodes may not be compatible with cloud execution
- Ensure you’re using the same model versions
- Check if models were uploaded correctly
- Remove any hardcoded local file paths
- Use FlowScale’s file upload nodes instead
- Some workflows may require specific Python packages
- Contact support if you need additional dependencies installed
Workflow execution is very slow
- Use higher-tier GPUs (L4, L40S, A100, H100, H200) for complex workflows
- T4 and A10G GPUs are suitable for simple to moderate workflows
- B200 for cutting-edge models requiring maximum performance
- Use FP16 precision when possible
- Consider smaller model variants for faster processing
- Minimize unnecessary nodes and connections
- Use batch processing for multiple similar operations
- Cache results when appropriate
- First execution may be slower due to model loading
- Subsequent runs are typically much faster
Out of memory errors
- Lower the batch_size parameter in your nodes
- Process images individually rather than in batches
- Use smaller image dimensions for development
- Upscale as a separate step if needed
- Use smaller models or quantized versions
- Unload models when not needed
- Switch to a GPU with more VRAM (L4, L40S, A100, H100, H200)
- Consider the memory requirements of your specific workflow
- H200 provides maximum VRAM (141GB) for the largest models
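The "lower the batch size / process individually" advice can be automated: try a batch, and on an out-of-memory failure halve the batch size and retry. A hedged sketch, where `run_batch` is a placeholder for your real inference call (real OOM failures may surface as library-specific exceptions rather than Python's `MemoryError`):

```python
def run_with_fallback(items, run_batch, batch_size=8):
    """Process items in batches, halving the batch size on MemoryError."""
    results = []
    i = 0
    while i < len(items):
        batch = items[i:i + batch_size]
        try:
            results.extend(run_batch(batch))
            i += batch_size
        except MemoryError:
            if batch_size == 1:
                raise  # even a single item doesn't fit; need more VRAM
            batch_size //= 2
    return results

# Simulated runner that "runs out of memory" for batches larger than 2:
def fake_run(batch):
    if len(batch) > 2:
        raise MemoryError
    return [x * 10 for x in batch]

print(run_with_fallback([1, 2, 3, 4, 5], fake_run, batch_size=4))
```

If the fallback bottoms out at a batch size of 1, that is the signal to move to a GPU with more VRAM.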
Deployment Issues
API deployment fails or times out
- Ensure workflow runs successfully in the editor first
- Check for any error nodes or warnings
- Verify all required inputs are properly configured
- Ensure output nodes are correctly set up
- Check if your workflow requires specific GPU types
- Verify sufficient resources are available
- Temporary network problems can cause deployment failures
- Try deploying again after a few minutes
API returns errors or unexpected results
- Check API request format matches documentation
- Verify required parameters are included
- Ensure data types match expectations (string, number, etc.)
- Confirm API key is correct and has proper permissions
- Check if API key has expired or been revoked
- Review API documentation for expected response structure
- Check if the workflow output nodes are configured correctly
- Ensure you’re not exceeding API rate limits
- Implement proper retry logic with exponential backoff
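Retry with exponential backoff, mentioned above, looks like this in outline. A minimal sketch, where `call_api` is a placeholder for your own request function (this is not an official FlowScale client):

```python
import time

def call_with_backoff(call_api, max_retries=5, base_delay=1.0):
    """Retry a callable on transient failures, doubling the delay each time."""
    for attempt in range(max_retries):
        try:
            return call_api()
        except RuntimeError:  # stand-in for HTTP 429 / transient errors
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Simulated API that fails twice, then succeeds:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return {"status": "ok"}

print(call_with_backoff(flaky, base_delay=0.01))
```

In real code, catch only the error types that indicate a transient condition (rate limits, timeouts); retrying a 400-level validation error just wastes your quota.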
Playground UI not loading or functioning properly
- Use a modern browser (Chrome, Firefox, Safari, Edge)
- Clear browser cache and cookies
- Disable browser extensions that might interfere
- Check your internet connection
- Try accessing from a different network
- Ensure uploaded files meet size and format requirements
- Check if all required fields are filled
- Large files may take time to upload and process
- Be patient with complex workflows
Account & Billing
How is usage calculated and billed?
- Charged per second of GPU usage
- Different rates for T4, A10G, L4, L40S, A100, H100, H200 and B200 GPUs
- Idle pods are charged $0.07 per hour
- Model storage: $0.01 per model download
- Workflow storage: Included in base plan
- No additional charges for API requests
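A back-of-envelope estimate combines per-second compute, idle time, and downloads. The $0.07/hour idle rate and $0.01 per-download fee come from this FAQ; the GPU per-hour rate in the example is a made-up placeholder, so check your plan's pricing page for real per-GPU rates:

```python
IDLE_RATE_PER_HOUR = 0.07   # from this FAQ
DOWNLOAD_FEE = 0.01         # per model download, from this FAQ

def estimate_cost(gpu_seconds, gpu_rate_per_hour, idle_hours=0.0, downloads=0):
    """Rough monthly-bill estimate in dollars."""
    compute = gpu_seconds / 3600 * gpu_rate_per_hour  # compute is billed per second
    idle = idle_hours * IDLE_RATE_PER_HOUR
    return round(compute + idle + downloads * DOWNLOAD_FEE, 4)

# 90 GPU-seconds at a hypothetical $1.20/hr rate, 2 idle hours, 3 downloads:
print(estimate_cost(90, 1.20, idle_hours=2, downloads=3))  # 0.2
```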
How do I upgrade my plan or add more resources?
- Go to account settings (profile icon → Settings)
- Click “Billing & Usage” tab
- Select “Change Plan”
- Choose your new plan
- Confirm changes - billing adjusts automatically
- Higher GPU quotas and access to premium GPUs
- Increased storage limits
- Priority support and SLA guarantees
- Team collaboration features
Can I get a refund or credit for unused resources?
- Unused compute time doesn’t expire
- Credits roll over month to month
- Can be used across all projects in your account
- Pro-rated refunds available for annual plans
- Monthly plans: No refunds, but you can downgrade
- Credits issued for platform outages or service issues
- Contact support for technical problems that prevent usage
Technical Support
How do I contact support?
- Click the “?” icon in any FlowScale interface
- Submit tickets directly from the platform
- Include relevant project and workflow details
- Discord community for peer support
- GitHub discussions for feature requests
- Documentation feedback via GitHub issues
- Dedicated support for Pro and Enterprise plans
- Phone and video call support available
- Custom SLA agreements
- Response times: Free tier 48-72 hours; Pro tier 24 hours; Enterprise 4-8 hours (or per SLA)
What information should I include in support requests?
- Your FlowScale username/email
- Project name and ID
- Workflow name and version
- What you were trying to do
- What actually happened vs. expected behavior
- Error messages (exact text or screenshots)
- Steps to reproduce the issue
- Browser and version (for UI issues)
- API client and version (for API issues)
- Any relevant workflow or deployment configurations
- When the issue started
- Any recent changes you made
- Whether the issue is reproducible
How do I report a bug or request a feature?
- Use in-app support for bugs affecting your work
- GitHub issues for general platform bugs
- Include reproduction steps and expected vs. actual behavior
- Discord community for discussion and voting
- GitHub discussions for detailed proposals
- In-app feedback for workflow-specific needs
- Email security@flowscale.ai for security vulnerabilities
- Include detailed reproduction steps
- Do not post security issues publicly
- Check our public roadmap for planned features
- Vote on existing requests to show interest
- Engage with the community for feature discussion
Best Practices
How can I optimize my workflows for better performance?
- Minimize unnecessary nodes and connections
- Use efficient node types when available
- Avoid redundant operations
- Use appropriate model sizes for your use case
- Consider quantized models for faster inference
- Cache frequently used models
- Choose the right GPU type for your workload
- Monitor resource utilization and adjust as needed
- Use batch processing for multiple similar requests
- Test with smaller parameters during development
- Use lower resolution for initial testing
- Scale up gradually to production settings
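One simple way to follow the "test small, scale up gradually" advice is to keep separate development and production parameter profiles. A sketch with generic, made-up parameter names (width, steps, etc.); substitute whatever your workflow actually exposes:

```python
# Cheap settings for iteration, full settings for production runs.
PROFILES = {
    "dev":  {"width": 512,  "height": 512,  "steps": 12, "batch_size": 1},
    "prod": {"width": 1024, "height": 1024, "steps": 30, "batch_size": 4},
}

def settings_for(env: str) -> dict:
    """Return a copy of the profile so callers can tweak it safely."""
    return dict(PROFILES[env])

dev = settings_for("dev")
print(dev["steps"], dev["width"])  # 12 512
```

Keeping both profiles in one place makes the dev-to-prod switch a one-line change instead of a hunt through node parameters.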
What are the security best practices?
- Keep API keys secure and never share them publicly
- Rotate keys regularly
- Use different keys for different environments (dev/prod)
- Use least-privilege principle for team member roles
- Regularly review and update team permissions
- Remove access for former team members promptly
- Don’t include sensitive data in workflows
- Use secure methods for handling user uploads
- Understand data retention and deletion policies
- Use HTTPS for all API communications
- Implement proper input validation
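Keeping keys out of source code and separated per environment, as recommended above, usually means reading them from environment variables. A minimal sketch; the `FLOWSCALE_API_KEY_*` variable names are assumptions for illustration, not official names:

```python
import os

def get_api_key(env: str = "prod") -> str:
    """Load the API key for an environment (dev/prod) from the environment,
    never from source code. Variable name is a hypothetical convention."""
    var = f"FLOWSCALE_API_KEY_{env.upper()}"
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running")
    return key

# Simulate an exported dev key (in practice, set this in your shell or CI secrets):
os.environ["FLOWSCALE_API_KEY_DEV"] = "sk-example-not-a-real-key"
print(get_api_key("dev"))
```

Failing loudly when the variable is missing is deliberate: a silently empty key turns into a confusing 401 much later.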