Transparent, Scalable Pricing for your AI Workloads
From development to deployment, our pricing adapts to your needs

AI Workspace
Tailored for your AI workloads, with flexible instances.
AI Lab
Designed for your educational needs; purchase capacity to match your daily lab requirements.
Bare Metal
Raw power at a predictable cost—optimized for your high-performance AI workloads.
SLURM Clusters
Scalable SLURM Clusters for High-Performance Computing.
Kubernetes Clusters
Kubernetes Clusters Built for Performance, Flexibility, and Growth.
AI Endpoints
Deploy AI models effortlessly with cost-effective, per-usage pricing.
Serverless GPUs
Maximize efficiency with no upfront costs—only pay for what your AI workloads consume.
Microsoft Azure AI Services
Experience Azure AI on Yotta Shakti Cloud with high-performance AI solutions.
Yotta Sarvam AI Services
Yotta Sarvam AI delivers scalable models, seamless deployment, and flexible pricing.
Add-On Services
Explore our add-on services to enhance performance.
AI Workspace Pricing
Shakti Cloud AI Workspace delivers a seamless experience with flexible configurations, offering Virtual Machines to meet diverse AI workload demands.
VM - 1 x H100
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 1 x Nvidia H100 SXM (80 GB)
- vCPU: 24
- RAM: 240 GB
- Root Disk: 100 GB High Speed Storage
- Unlimited Data Ingress & Egress

VM - 2 x H100
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 2 x Nvidia H100 SXM (160 GB)
- GPU Interconnect: NVLink
- vCPU: 52
- RAM: 480 GB
- Root Disk: 100 GB High Speed Storage
- Unlimited Data Ingress & Egress

VM - 4 x H100
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 4 x Nvidia H100 SXM (320 GB)
- GPU Interconnect: NVLink
- vCPU: 102
- RAM: 960 GB
- Root Disk: 100 GB High Speed Storage
- Unlimited Data Ingress & Egress

VM - 1 x L40S
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 1 x Nvidia L40S (48 GB)
- vCPU: 28
- RAM: 240 GB
- Root Disk: 100 GB High Speed Storage
- Unlimited Data Ingress & Egress

VM - 2 x L40S
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 2 x Nvidia L40S (96 GB)
- vCPU: 56
- RAM: 480 GB
- Root Disk: 100 GB High Speed Storage
- Unlimited Data Ingress & Egress

VM - 1 x H100
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 1 x Nvidia H100 SXM (80 GB)
- vCPU: 24
- RAM: 240 GB
- Root Disk: 100 GB High Speed Storage
- Object Storage: 1.5 TB
- Unlimited Data Ingress & Egress

VM - 2 x H100
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 2 x Nvidia H100 SXM (160 GB)
- GPU Interconnect: NVLink
- vCPU: 52
- RAM: 480 GB
- Root Disk: 100 GB High Speed Storage
- Object Storage: 1.5 TB
- Unlimited Data Ingress & Egress

VM - 4 x H100
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 4 x Nvidia H100 SXM (320 GB)
- GPU Interconnect: NVLink
- vCPU: 102
- RAM: 960 GB
- Root Disk: 100 GB High Speed Storage
- Object Storage: 1.5 TB
- Unlimited Data Ingress & Egress

VM - 1 x L40S
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 1 x Nvidia L40S (48 GB)
- vCPU: 28
- RAM: 240 GB
- Root Disk: 100 GB High Speed Storage
- Object Storage: 300 GB
- Unlimited Data Ingress & Egress

VM - 2 x L40S
- Shakti Cloud AI Workspace-as-a-Service - Virtual Machine
- GPU: 2 x Nvidia L40S (96 GB)
- vCPU: 56
- RAM: 480 GB
- Root Disk: 100 GB High Speed Storage
- Object Storage: 300 GB
- Unlimited Data Ingress & Egress
Shakti Cloud Kubernetes Management Platform Fee
- Shakti Cloud Kubernetes Management Platform, bundled with 3 Head Nodes (CPU Nodes) and 1 Login Node (CPU Node)

Shakti NVAIE – NVIDIA AI Enterprise Licensing + Lepton Platform
- NVIDIA AI Enterprise is a full-stack, cloud-native platform designed to supercharge data science workflows and simplify the development, scaling, and deployment of cutting-edge AI applications, including the latest in generative AI.
- Lepton is a high-performance AI orchestration platform that accelerates model development, training, and deployment with full ML tooling, RBAC, benchmarking, and observability.

Shakti Cloud NVCF Serverless Platform Fee
- Shakti Cloud NVCF Platform for the Serverless GPU functionality

AI Lab Platform Fee
- User Access: 10 end-user accounts for development + 2 admin accounts for lab management
- Effortless Container Management: Deploy pre-configured ML/DL environments instantly
- Flexible GPU Profiles: Slice GPUs for multiple users or combine them for high-performance tasks
- Built-in IDEs: Work seamlessly with Jupyter & VSCode
- Storage: 250 GB included; allocate per user as needed
- Unlimited Data Transfer: No limits on ingress or egress

Shakti Cloud SLURM Management Platform Fee
- Shakti Cloud SLURM Management Platform; includes 1 Head Node (CPU Node) and 1 Login Node (CPU Node)

Shakti Cloud SLURM Management Platform (High Availability) Fee
- Shakti Cloud SLURM Management Platform in High Availability; includes 2 Head Nodes (CPU Nodes in High Availability) and 1 Login Node (CPU Node)

Shakti Cloud InfiniBand Interconnectivity
- 3.2 Tb/s interconnect: ultra-low-latency, high-bandwidth fabric designed for demanding AI and HPC clusters.
AI Lab Pricing
Shakti AI Lab provides an instant, hassle-free AI development environment with pre-loaded libraries, so users can begin developing immediately.
| Workstation Name | GPU Memory | Monthly Price |
|---|---|---|
| AI Lab Workstation - 10GB - H100 | 10GB - H100 | ₹ 25,000 per workstation |
| AI Lab Workstation - 20GB - H100 | 20GB - H100 | ₹ 50,000 per workstation |
| AI Lab Workstation - 40GB - H100 | 40GB - H100 | ₹ 1,00,000 per workstation |
| AI Lab Workstation - 80GB - H100 | 80GB - H100 | ₹ 1,75,000 per workstation |
| AI Lab Workstation - 160GB - H100 | 160GB - H100 | ₹ 3,50,000 per workstation |
| AI Lab Workstation - 320GB - H100 | 320GB - H100 | ₹ 70,00,000 per workstation |
| AI Lab Workstation - 640GB - H100 | 640GB - H100 | ₹ 1,40,00,000 per workstation |
| AI Lab Workstation - 48GB - L40s | 48GB - L40s | ₹ 74,000 per workstation |
| AI Lab Workstation - 96GB - L40s | 96GB - L40s | ₹ 1,48,000 per workstation |
| AI Lab Workstation - 192GB - L40s | 192GB - L40s | ₹ 2,96,000 per workstation |
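As an illustration, the H100 tiers above can be matched to a workload programmatically: given a model's GPU-memory requirement, pick the smallest workstation that fits. The tier list is transcribed from the table; the helper function itself is a hypothetical sketch, not part of any Shakti Cloud API.

```python
# Hypothetical helper: pick the smallest AI Lab H100 workstation tier
# whose GPU memory fits a given requirement. Tiers and monthly prices
# (INR) are transcribed from the pricing table above.
H100_TIERS = [          # (GPU memory in GB, monthly price in INR)
    (10, 25_000),
    (20, 50_000),
    (40, 100_000),
    (80, 175_000),
    (160, 350_000),
    (320, 7_000_000),
    (640, 14_000_000),
]

def smallest_fitting_tier(required_gb: int):
    """Return (gpu_gb, monthly_price) for the smallest tier that fits."""
    for gpu_gb, price in H100_TIERS:
        if gpu_gb >= required_gb:
            return gpu_gb, price
    raise ValueError(f"No single workstation tier offers {required_gb} GB")

# Example: a 7B-parameter model served in FP16 needs roughly 14 GB for
# weights alone, so the 20 GB tier is the smallest that fits.
print(smallest_fitting_tier(14))  # → (20, 50000)
```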
Bare Metal Pricing
Choose from our powerful, dedicated Bare Metal servers, optimized for performance-intensive workloads and tailored to meet your AI training needs.
| Plan Description | Monthly | 6 Months | 12 Months | 24 Months | 36 Months | 48 Months |
|---|---|---|---|---|---|---|
| Bare Metal 8 x HGX H100 | ₹ 325 | ₹ 313 | ₹ 243 | ₹ 234 | ₹ 226 | ₹ 217 |
| Bare Metal 4 x L40S | ₹ 182 | ₹ 161 | ₹ 147 | ₹ 145 | ₹ 143 | ₹ 141 |

| Plan Description | On-Demand | Monthly | 6 Months | 12 Months | 24 Months | 36 Months |
|---|---|---|---|---|---|---|
| Bare Metal 8 x HGX B200 | ₹ 527 | ₹ 492 | ₹ 466 | ₹ 439 | ₹ 351 | ₹ 307 |
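For example, committing to a 12-month term drops the 8 x HGX H100 rate from ₹ 325 to ₹ 243, a saving of roughly a quarter. A minimal sketch of that arithmetic, using the rates from the table (the per-unit basis is whatever the table implies):

```python
# Commitment-term savings on the Bare Metal 8 x HGX H100 plan,
# using the rates from the table above (currency: INR).
monthly_rate = 325
twelve_month_rate = 243

savings_pct = (monthly_rate - twelve_month_rate) / monthly_rate * 100
print(f"12-month commitment saves {savings_pct:.1f}% vs monthly")  # → 25.2%
```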
SLURM Clusters Pricing
Run and scale demanding AI, ML, and HPC workloads with precision using our SLURM-managed infrastructure.
| Plan Description | Monthly | 6 Months | 12 Months | 24 Months | 36 Months | 48 Months |
|---|---|---|---|---|---|---|
| SLURM Cluster Pricing for HGX H100 | ₹ 357 | ₹ 345 | ₹ 267 | ₹ 257 | ₹ 249 | ₹ 239 |
| SLURM Cluster Pricing for L40S | ₹ 200 | ₹ 177 | ₹ 162 | ₹ 160 | ₹ 158 | ₹ 156 |
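A SLURM-managed cluster like this is typically driven with batch scripts submitted via `sbatch`. The sketch below is a generic submission requesting GPUs; the partition name and resource sizes are illustrative placeholders, not Shakti Cloud defaults.

```bash
#!/bin/bash
# Generic SLURM batch script: request 8 GPUs on one node for a
# training job. Partition name and sizes are illustrative only.
#SBATCH --job-name=train-llm
#SBATCH --partition=gpu            # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --gres=gpu:8               # e.g. all 8 GPUs of an HGX H100 node
#SBATCH --cpus-per-task=32
#SBATCH --time=24:00:00

srun python train.py
```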
Kubernetes Clusters Pricing
Leverage Kubernetes orchestration to efficiently deploy and scale your training and inferencing workloads.
| Plan Description | Monthly | 6 Months | 12 Months | 24 Months | 36 Months | 48 Months |
|---|---|---|---|---|---|---|
| Kubernetes Cluster Pricing for HGX H100 | ₹ 373 | ₹ 360 | ₹ 279 | ₹ 269 | ₹ 260 | ₹ 250 |
| Kubernetes Cluster Pricing for L40S | ₹ 209 | ₹ 185 | ₹ 169 | ₹ 167 | ₹ 165 | ₹ 163 |
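On a GPU-enabled Kubernetes cluster, workloads claim GPUs through the standard `nvidia.com/gpu` resource. A minimal pod sketch, assuming the NVIDIA device plugin is installed on the cluster (names and image are illustrative placeholders):

```yaml
# Minimal pod requesting one GPU via the NVIDIA device plugin.
# Names and image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference
spec:
  restartPolicy: Never
  containers:
    - name: worker
      image: nvcr.io/nvidia/pytorch:24.05-py3
      command: ["python", "-c", "import torch; print(torch.cuda.is_available())"]
      resources:
        limits:
          nvidia.com/gpu: 1   # schedule onto a node with a free GPU
```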
AI Endpoint Pricing
Select the right endpoint for your project and enjoy transparent, token-based pricing designed to meet your unique AI needs.
Meta/Llama3-8b-instruct
- NVIDIA NIM for GPU-accelerated Llama 3 8B inference through OpenAI-compatible APIs

Llama-3.1-8b-instruct
- NVIDIA NIM for GPU-accelerated Llama 3.1 8B inference through OpenAI-compatible APIs

Meta/Llama3-70b-instruct
- NVIDIA NIM for GPU-accelerated Llama 3 70B inference through OpenAI-compatible APIs

Llama-3.1-405b-instruct
- NVIDIA NIM for GPU-accelerated Llama 3.1 405B inference through OpenAI-compatible APIs

Llama-3.1-70b-instruct
- NVIDIA NIM for GPU-accelerated Llama 3.1 70B inference through OpenAI-compatible APIs

Mixtral-8x7B-Instruct-v0.1
- NVIDIA NIM for GPU-accelerated Mixtral-8x7B-Instruct-v0.1 inference through OpenAI-compatible APIs

Llama-3.1-8b-base
- NVIDIA NIM for GPU-accelerated Llama 3.1 8B inference through OpenAI-compatible APIs

Mixtral-8x22B-Instruct-v0.1
- NVIDIA NIM for GPU-accelerated Mixtral-8x22B-Instruct-v0.1 inference through OpenAI-compatible APIs

meta-llama-2-13b-chat
- NVIDIA NIM for GPU-accelerated Llama 2 13B inference through OpenAI-compatible APIs

meta-llama-2-70b-chat
- NVIDIA NIM for GPU-accelerated Llama 2 70B inference through OpenAI-compatible APIs

Llama-3-Taiwan-70B-Instruct
- NVIDIA NIM for GPU-accelerated Llama-3-Taiwan-70B-Instruct inference through OpenAI-compatible APIs

nemotron-4-340b-instruct
- NVIDIA NIM for GPU-accelerated Nemotron-4-340B-Instruct inference through OpenAI-compatible APIs

meta-llama-2-7b-chat
- NVIDIA NIM for GPU-accelerated Llama 2 7B inference through OpenAI-compatible APIs

Mistral-7B-Instruct-v0.3
- NVIDIA NIM for GPU-accelerated Mistral-7B-Instruct-v0.3 inference through OpenAI-compatible APIs

Nemotron-4-340B-Reward
- NVIDIA NIM for GPU-accelerated Nemotron-4-340B-Reward inference through OpenAI-compatible APIs

Llama-3-Swallow-70B-Instruct-v0.1
- NVIDIA NIM for GPU-accelerated Llama-3-Swallow-70B-Instruct-v0.1 inference through OpenAI-compatible APIs
NVIDIA Retrieval QA E5 Embedding v5
- NVIDIA NIM for GPU-accelerated NVIDIA Retrieval QA E5 Embedding v5 inference

NVIDIA Retrieval QA Mistral 4B Reranking v3
- NVIDIA NIM for GPU-accelerated NVIDIA Retrieval QA Mistral 4B Reranking v3 inference

NVIDIA Retrieval QA Mistral 7B Embedding v2
- NVIDIA NIM for GPU-accelerated NVIDIA Retrieval QA Mistral 7B Embedding v2 inference

Snowflake Arctic Embed Large Embedding
- NVIDIA NIM for GPU-accelerated Snowflake Arctic Embed Large Embedding inference

MolMIM
- MolMIM is a transformer-based model developed by NVIDIA for controlled small-molecule generation.

DiffDock
- DiffDock predicts the 3D structure of the interaction between a molecule and a protein.

ProteinMPNN
- Predicts amino acid sequences from the 3D structures of proteins.

AlphaFold2
- A widely used model for predicting the 3D structures of proteins from their amino acid sequences.

ASR Parakeet CTC Riva 1.1b
- Riva ASR NIM delivers accurate English speech-to-text transcription and enables easy-to-use, optimized ASR inference for large-scale deployments.

TTS FastPitch HifiGAN Riva
- Riva TTS NIM provides easy access to state-of-the-art text-to-speech models capable of synthesizing English speech from text.

NMT Megatron Riva 1b
- Riva NMT NIM provides easy access to state-of-the-art neural machine translation (NMT) models, capable of translating text from one language to another with exceptional accuracy.
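The chat endpoints above expose OpenAI-compatible APIs, so any OpenAI-style client can talk to them. A minimal standard-library sketch that builds such a request; the base URL, API key, and default parameters are hypothetical placeholders to be replaced with the values from your endpoint dashboard:

```python
import json
import urllib.request

# Sketch of a chat-completion call against an OpenAI-compatible NIM
# endpoint. BASE_URL and API_KEY are hypothetical placeholders.
BASE_URL = "https://api.example-endpoint.invalid/v1"
API_KEY = "YOUR_API_KEY"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("meta/llama3-8b-instruct", "Hello!")
print(req.full_url)
# Sending it is then a one-liner:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```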
Serverless GPUs Pricing
Leverage on-demand compute power with pay-as-you-go pricing, perfect for scaling AI applications seamlessly.
Shakti Cloud Serverless 1 x NVIDIA H100 80GB Per Sec
- Shakti Cloud Serverless runtime-scaling instance with:
- GPU: 80 GB H100
- CPU: 14 Cores
- RAM: 256 GB
- Storage: 500 GB
- Unlimited ingress and egress

Shakti Cloud Serverless with 1 x 40 GB H100 Per Sec
- Shakti Cloud Serverless runtime-scaling instance with:
- GPU: 40 GB H100
- CPU: 7 Cores
- RAM: 128 GB
- Storage: 250 GB
- Unlimited ingress and egress

Shakti Cloud Serverless with 1 x 48 GB L40S Per Sec
- Shakti Cloud Serverless runtime-scaling instance with:
- GPU: 48 GB L40S
- CPU: 16 Cores
- RAM: 256 GB
- Storage: 500 GB
- Unlimited ingress and egress

Shakti Cloud Serverless with 1 x 16 GB L40S Per Sec
- Shakti Cloud Serverless runtime-scaling instance with:
- GPU: 16 GB L40S
- CPU: 4 Cores
- RAM: 64 GB
- Storage: 128 GB
- Unlimited ingress and egress

Shakti Cloud Serverless with 1 x 24 GB L40S Per Sec
- Shakti Cloud Serverless runtime-scaling instance with:
- GPU: 24 GB L40S
- CPU: 8 Cores
- RAM: 128 GB
- Storage: 258 GB
- Unlimited ingress and egress
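Serverless capacity is metered per second, so cost scales directly with runtime. A sketch of that arithmetic; note the page does not list serverless rates, so the rate below is purely an illustrative placeholder:

```python
# Per-second serverless billing sketch. RATE_PER_SEC is a hypothetical
# placeholder, not an actual Shakti Cloud price.
RATE_PER_SEC = 0.10  # INR per GPU-second (illustrative)

def serverless_cost(runtime_seconds: float, rate_per_sec: float = RATE_PER_SEC) -> float:
    """Cost of a single-GPU serverless invocation billed per second."""
    return runtime_seconds * rate_per_sec

# A 90-second inference burst:
print(f"₹ {serverless_cost(90):.2f}")  # → ₹ 9.00
```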
Microsoft Azure AI Services
Experience the power of Azure AI services on Yotta Shakti Cloud, with cost-effective pricing and performance optimized for your AI workloads.
| Product Name | Service | Price (Starting from) |
|---|---|---|
| Shakti Cloud Azure ML Studio | Azure ML Studio running on Shakti Cloud with H100 GPUs | ₹ 264 / GPU / Hr |
| Shakti Cloud Azure Database Services | SQL Managed Instance – General Purpose (PaaS) | ₹ 23.54 / Core / Hr |
| | SQL Managed Instance – Business Critical (PaaS) | ₹ 62.48 / Core / Hr |
| | PostgreSQL (preview) (General Purpose) | ₹ 11.08 / Core / Hr |
| | PostgreSQL (preview) (Business Critical) | ₹ 12.6 / Core / Hr |
| | Azure Arc-enabled SQL Server (Standard Edition) | ₹ 8.8 / Core / Hr (VMs + licensing cost additional) |
| | Azure Arc-enabled SQL Server (Enterprise Edition) | ₹ 33 / Core / Hr (VMs + licensing cost additional) |
| Service | Price (Starting from) |
|---|---|
| Azure OpenAI | ₹ 13.2 / million input tokens |
| Relational Database | ₹ 11 / Core / Hr |
| Azure Open Datasets | ₹ 9.68 / GB |
| Internet of Things | ₹ 1232 / Hub Unit (400,000 messages/day of 4 KB size) |
| Open-Source Relational Database | ₹ 11 / Core / Hr |
| Azure Analytics | ₹ 12.68 / vCore / Hr |
| Integration | ₹ 52.8 / Million Operations |
| Azure Web Services | ₹ 792 / app / month |
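Token-metered services bill on usage volume. Taking the ₹ 13.2 per million input tokens rate from the table (output-token pricing is not listed on this page, so only input cost is modelled here):

```python
# Input-token cost sketch for the Azure OpenAI row above:
# ₹ 13.2 per million input tokens. Output-token pricing is not
# listed on this page and is therefore not modelled.
INPUT_RATE_PER_MILLION = 13.2  # INR

def input_token_cost(tokens: int) -> float:
    return tokens / 1_000_000 * INPUT_RATE_PER_MILLION

# 2.5 million input tokens:
print(f"₹ {input_token_cost(2_500_000):.2f}")  # → ₹ 33.00
```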
Yotta Sarvam AI Services
Harness the capabilities of Yotta Sarvam AI Services with transparent pricing crafted for advanced AI model development, seamless deployment, and scalable performance tailored to your business needs.
| Name | Price |
|---|---|
| Sarvam Agents | ₹ 7.00 per minute |

| Name | Price |
|---|---|
| Speech-to-text: Saarika | ₹ 30.00 per hour of audio |
| Text-to-speech: Bulbul | ₹ 15.00 per 10,000 characters |
| ASR-Translate: Saaras | ₹ 30.00 per hour of audio |
| Mayura | ₹ 20.00 per 10,000 characters |

| Name | Price |
|---|---|
| Parsing | ₹ 10.00 per page |
| Call Analytics | ₹ 2.50 per minute |

| Name | Price |
|---|---|
| Legal Research | ₹ 100.00 per query |
| Doc Ops | ₹ 3.00 per page |
| Document Chat | ₹ 10.00 per query |
| Data Analyst | ₹ 100.00 per query |
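Each Sarvam service meters a different unit (audio hours, characters, pages, call minutes), so estimating a monthly bill means summing per-unit costs. An illustrative sketch using a subset of the rates from the tables above:

```python
# Illustrative cost estimate across Sarvam AI services, using
# per-unit rates from the tables above (currency: INR).
RATES = {
    "saarika_per_audio_hour": 30.00,   # Speech-to-text
    "bulbul_per_10k_chars": 15.00,     # Text-to-speech
    "parsing_per_page": 10.00,
    "call_analytics_per_minute": 2.50,
}

def estimate(audio_hours=0.0, tts_chars=0, pages=0, call_minutes=0.0) -> float:
    """Sum the per-unit costs of the selected Sarvam services."""
    return (
        audio_hours * RATES["saarika_per_audio_hour"]
        + tts_chars / 10_000 * RATES["bulbul_per_10k_chars"]
        + pages * RATES["parsing_per_page"]
        + call_minutes * RATES["call_analytics_per_minute"]
    )

# 2 hours of transcription, 50,000 TTS characters, 10 parsed pages:
print(estimate(audio_hours=2, tts_chars=50_000, pages=10))  # → 235.0
```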
Add-On Services

High Speed Storage
- High Speed Storage for faster data reads and writes, local and remote

Object Storage
- Object Storage buckets to read and transmit data, local and remote

Public IP
- Public IP for connectivity

VPN
- Virtual Private Network for security
Accelerate AI with Shakti Cloud