Enterprise-Grade Security

Voice AI that lives on
your infrastructure

Deploy Ottex on your own cloud. Use your own AI models. Keep every byte of data inside your network. Zero vendor lock-in.

Your team wants voice AI. Your security team says no.

Every enterprise faces the same dilemma: employees want modern voice-to-text tools, but sending audio to third-party servers violates security policies. Most voice AI tools force you to choose between productivity and compliance.

Ottex gives you both

Your cloud. Your models. Your rules.

1. Deploy on your infrastructure

The Ottex backend runs in your cloud: AWS, Google Cloud, Azure, or on-premise. Audio never leaves your network. No data passes through us. You own the entire pipeline.

2. Bring your own AI models

Gemini, Claude, GPT, Groq — or run Whisper, Parakeet, Mistral locally. Use your existing cloud credits or self-host. Fine-tune a model for your domain. Ottex works with all of them.

3. Plug into your existing auth

SSO with Okta, Azure AD, Google Workspace, or whatever your company uses. Employees log in with their existing credentials. IT configures once, and everyone gets access.
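As a sketch of what "configure once" might look like, here is a hypothetical environment-variable wiring for OIDC. The variable names, issuer URL, and client ID are illustrative placeholders, not Ottex's actual configuration surface:

```shell
# Hypothetical backend configuration for OIDC SSO.
# All names and values below are placeholders for your own provider.
export AUTH_MODE="oidc"
export OIDC_ISSUER_URL="https://your-company.okta.com"
export OIDC_CLIENT_ID="ottex-backend"
export OIDC_ALLOWED_DOMAIN="your-company.com"
```

The same pattern applies to SAML: point the backend at your IdP's metadata once, and every employee authenticates with credentials they already have.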

Enterprise capabilities

  • On-premise deployment: AWS, GCP, Azure, or bare metal
  • Bring your own AI: any model, any provider
  • SSO integration: Okta, Azure AD, Google, SAML/OIDC
  • Data residency: your choice — any region, any cloud
  • Reuse cloud credits: GCP, Azure, AWS — no double-paying
  • Compliance: inherits your SOC 2 / HIPAA posture
  • Air-gapped mode: runs fully offline with local models
  • Modern AI models: Gemini, GPT, Claude, Whisper, and more

Any model. Any provider. Any setup.

Ottex supports any OpenAI-compatible API for both transcription and text processing. If your provider isn't supported yet, we can add it in days, not months.

Cloud providers

  • Google Gemini (Vertex AI, AI Studio)
  • OpenAI GPT (Azure OpenAI, direct)
  • Anthropic Claude (AWS Bedrock, direct)
  • Groq API (Whisper, Llama)
  • Any OpenAI-compatible endpoint

Local models

  • Whisper (all sizes, including large-v3)
  • NVIDIA Parakeet
  • Mistral, Llama, Phi
  • Any model via Ollama or vLLM
  • Fine-tuned models for your domain

Your VPC / private cloud

  • Models deployed in your own VPC
  • Private endpoints (no public internet)
  • Custom fine-tuned models
  • Mix local transcription + cloud LLM
  • Full air-gapped operation
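The "any OpenAI-compatible endpoint" contract above can be sketched with nothing but the standard library. The base URL, model name, and API key below are placeholders for whatever you deploy (vLLM, Ollama, Groq, a cloud provider), not an Ottex-specific API:

```python
"""Sketch: cleaning up a raw transcript via any OpenAI-compatible
/chat/completions route. Endpoint and model names are illustrative."""
import json
import urllib.request


def endpoint(base_url: str, route: str) -> str:
    """Join an OpenAI-compatible base URL with an API route."""
    return base_url.rstrip("/") + "/" + route.lstrip("/")


def clean_transcript(base_url: str, model: str, transcript: str,
                     api_key: str = "unused-for-local") -> str:
    """Ask the model to fix punctuation/casing in a raw transcript."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user",
             "content": f"Fix punctuation and casing only: {transcript}"},
        ],
    }
    req = urllib.request.Request(
        endpoint(base_url, "v1/chat/completions"),
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the wire format is the same everywhere, swapping Groq for a local vLLM server is just a different `base_url` and `model`, which is what makes the "no lock-in" claim hold in practice.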

Built for teams that can't compromise on security

Financial Services

Trading desks, compliance teams, and analysts dictate sensitive communications daily. With on-premise Ottex, audio stays within your SOC 2 boundary.

Healthcare

Clinical notes, patient summaries, referral letters - all dictated without audio leaving your HIPAA-compliant infrastructure. Works with your EHR system.

Legal

Attorney-client privilege means voice data cannot touch third-party servers. Period. Ottex on-premise with your own AI models keeps everything within your firm's network.

Government & Defense

Air-gapped networks, FedRAMP requirements, classified environments. Ottex can run fully offline with local models. No internet connection required.

Technology Companies

Your engineers discuss proprietary code, architecture decisions, and trade secrets. On-premise Ottex with Gemini through your GCP credits keeps those conversations in-house.

Consulting & Professional Services

Client engagements under NDA, strategy documents, confidential reports. Voice data from client work stays within your network — not on third-party servers.

Stop paying twice for AI

Your company already pays for Google Cloud, Azure, or AWS. You already have AI model access through those platforms. Ottex plugs into what you have — no duplicate AI costs, no markup.

  • Use Gemini 3 Flash through your existing GCP credits
  • Use GPT-audio through your Azure OpenAI deployment
  • Use Whisper on AWS SageMaker with your own infrastructure

Average AI cost per employee: ~$0.50/mo. Top 5% power users: ~$2/mo.
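Those per-seat numbers are easy to sanity-check with back-of-envelope arithmetic. The per-minute rate below is an assumption for illustration, not a quoted Ottex or provider price:

```python
def monthly_ai_cost(minutes_per_day: float, workdays: int,
                    price_per_minute: float) -> float:
    """Rough monthly AI spend for one employee's dictation."""
    return minutes_per_day * workdays * price_per_minute


# Assumed: a blended transcription + LLM cost of $0.005/min of audio.
typical = monthly_ai_cost(5, 22, 0.005)      # 5 min/day  -> ~$0.55/mo
power_user = monthly_ai_cost(20, 22, 0.005)  # 20 min/day -> ~$2.20/mo
```

At rates in that ballpark, even heavy dictation stays in the low single dollars per seat per month, which is why reusing existing cloud credits usually covers it.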

How it works with Ottex

  • Software: Ottex per-seat license
  • AI costs: your existing cloud credits
  • Data location: your own infrastructure
  • Vendor lock-in: none — swap models anytime

Lightweight. Deployable. Compatible.

Employee devices (macOS, iOS, Windows, Android)
  → HTTPS over your internal network
  → Ottex backend, deployed on your infrastructure
      • Auth service: SSO (SAML/OIDC)
      • AI router / model gateway
  → Your AI models: Gemini / Claude / GPT / Groq / Whisper / Parakeet / local LLM

Deployment options

  • Docker container (single image, 5-minute setup)
  • Kubernetes Helm chart
  • On-premise bare metal

Requirements

  • Any cloud with container support
  • Access to an AI model API (Gemini, GPT, Claude, or local)
  • SSO provider (SAML 2.0 / OIDC)
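Given those requirements, a single-container start might look like the following. The image name, port, and environment variables are illustrative placeholders, not a published Ottex image or its real configuration keys:

```shell
# Hypothetical one-container deployment; adjust every name for your environment.
docker run -d \
  --name ottex-backend \
  -p 8443:8443 \
  -e AI_BASE_URL="http://models.internal:8000/v1" \
  -e OIDC_ISSUER_URL="https://your-company.okta.com" \
  ottex/backend:latest
```

The same configuration carries over to the Helm chart for Kubernetes; only the packaging changes, not the knobs.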

Ready to bring voice AI inside your firewall?

Talk to us about deploying Ottex on your infrastructure. We'll help you set up a pilot for your team in under a day.