Shakti Serverless GPUs FAQs

Quick answers to help you get started.

What frameworks are supported in the Serverless GPU environment?

Shakti’s serverless GPUs support popular machine learning and AI frameworks such as TensorFlow and PyTorch, along with CUDA for low-level GPU programming, enabling seamless deployment of models built on multiple libraries.

Can I deploy custom AI models on Shakti’s Serverless GPU platform?

Yes, Shakti supports bring-your-own-container (BYOC) functionality, allowing you to deploy custom models using containerized environments. You can upload your Docker container and run your model in our serverless GPU infrastructure.
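As a rough illustration, a BYOC image typically wraps the model behind a small HTTP endpoint. The sketch below uses only the Python standard library; the /predict route, the JSON schema, and the port are illustrative assumptions, not Shakti's actual container contract.

```python
# Minimal sketch of an inference endpoint a BYOC container might expose.
# NOTE: /predict, the payload shape, and port 8080 are assumptions for
# illustration only -- consult the platform docs for the real contract.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(payload):
    # Placeholder for real model inference (e.g. a PyTorch forward pass);
    # here we just sum the inputs so the example is self-contained.
    return {"prediction": sum(payload.get("inputs", []))}


class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def main():
    # This would be the container's entrypoint (e.g. CMD in a Dockerfile).
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```

In a real deployment, predict() would load your model weights at startup and run inference on the GPU; the container image bundles the framework and drivers your model needs.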

What is the pricing model for Serverless GPUs?

Serverless GPUs are billed per second of GPU usage. You are charged only for the time your model is actually running, giving cost efficiency and scalability without over-provisioning.
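A back-of-the-envelope cost model for per-second billing looks like this; the rate used in the comment is a made-up placeholder, not an actual Shakti price.

```python
# Per-second GPU billing: cost scales only with actual runtime.
def gpu_cost(runtime_seconds: float, rate_per_second: float) -> float:
    """Charge only for the seconds the model actually runs."""
    return runtime_seconds * rate_per_second


# Example: a 90-second inference batch at a hypothetical $0.0005/s
# costs about $0.045 -- no charge accrues while the model is idle.
```

The point of the model is that idle time costs nothing: a workload that runs for 90 seconds a day is billed for 90 seconds, not for a provisioned instance-hour.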

How do you handle auto-scaling for Serverless GPUs?

Shakti’s serverless GPUs scale automatically based on request volume, latency, or throughput, so resources are used efficiently without manual intervention.
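A request-rate-based scaling rule of the kind described above can be sketched as follows; the target capacities and replica bounds are illustrative assumptions, not Shakti's actual scaling policy.

```python
# Sketch of request-rate-based autoscaling: pick the number of replicas
# so each one stays at or under its target load. Bounds are illustrative.
import math


def desired_replicas(requests_per_second: float,
                     capacity_per_replica: float,
                     min_replicas: int = 0,
                     max_replicas: int = 100) -> int:
    """Return a replica count that keeps per-replica load within capacity."""
    if requests_per_second <= 0:
        return min_replicas  # scale to zero when idle
    needed = math.ceil(requests_per_second / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))
```

With min_replicas=0 the rule scales to zero when traffic stops, which is what makes serverless GPUs cheap for bursty workloads; a latency- or throughput-based controller would use the same clamp-to-bounds structure with a different signal.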

How does Shakti ensure high availability and fault tolerance for Serverless GPU tasks?

Shakti’s serverless GPU infrastructure is built with high availability in mind, featuring fault-tolerant execution, automatic job restarts, and multi-region redundancy to minimize downtime.
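The "automatic job restarts" behavior can be pictured as a bounded retry loop with backoff; this is a generic sketch of the pattern, not Shakti's internal scheduler, and the function names are hypothetical.

```python
# Illustrative fault-tolerance pattern: re-run a failed GPU task a bounded
# number of times with exponential backoff before surfacing the failure.
import time


def run_with_restarts(task, max_restarts: int = 3, backoff_s: float = 0.0):
    """Run `task`; on failure, restart it up to `max_restarts` times."""
    for attempt in range(max_restarts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_restarts:
                raise  # restarts exhausted; surface the failure
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
```

Transient faults (a preempted node, a brief network partition) are absorbed by the restarts, while persistent failures are re-raised after the final attempt so callers see the error rather than an infinite loop.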