Private AI Voice Dictation for Enterprise Teams
Reclaim 4+ hours every week from typing and formatting. Per person.
Deployed to your infrastructure.
Same Ottex value. Enterprise controls.
Ottex Enterprise keeps the core product promise intact, then adds the deployment, model, auth, and compliance controls your team needs.
1. Deploy on your infrastructure
The Ottex backend runs in your cloud - AWS, Google Cloud, Azure, or on-premise. Your team gets the same voice input workflows without routing audio or data through us.
2. Bring your own AI models
Gemini, Claude, GPT, Groq — or run Whisper, Parakeet, Mistral locally. Use your existing cloud credits or self-host. Fine-tune a model for your domain. Ottex works with all of them.
3. Plug into your existing auth
SSO with Okta, Azure AD, Google Workspace - whatever your company uses. IT configures it once, employees sign in with their existing credentials, and rollout stays simple.
What Enterprise adds
The difference is not the promise. It's the control layer around infrastructure, deployment, data residency, and compliance.
Bring the models and deployment setup you already trust
Ottex supports any OpenAI-compatible API for transcription and text processing. If your provider isn't supported yet, we can add it in days, not months.
Cloud providers
- Google Gemini (Vertex AI, AI Studio)
- OpenAI GPT (Azure OpenAI, direct)
- Anthropic Claude (AWS Bedrock, direct)
- Groq API (Whisper, Llama)
- Any OpenAI-compatible endpoint
Local models
- Whisper (all sizes, including large-v3)
- NVIDIA Parakeet
- Mistral, Llama, Phi
- Any model via Ollama or vLLM
- Fine-tuned models for your domain
Your VPC / private cloud
- Models deployed in your own VPC
- Private endpoints (no public internet)
- Custom fine-tuned models
- Mix local transcription + cloud LLM
- Full air-gapped operation
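As a sketch of what "any OpenAI-compatible endpoint" means in practice: compatible servers (vLLM, a cloud provider's compatibility layer, or a gateway inside your VPC) reproduce the OpenAI audio API shape, so a client can be pointed at your private endpoint instead of a public one. The base URL, key, and model name below are placeholders, not real Ottex values.

```shell
# Placeholder values - substitute your own private endpoint and key.
export BASE_URL="https://models.internal.example.com/v1"
export API_KEY="sk-..."   # your key, on your contract - never ours

# Transcribe a clip against your endpoint, using the multipart form
# shape of the OpenAI audio transcription API that compatible
# servers (e.g. vLLM) mirror:
curl -s "$BASE_URL/audio/transcriptions" \
  -H "Authorization: Bearer $API_KEY" \
  -F file=@meeting.wav \
  -F model=whisper-large-v3
```

The same pattern applies whether the endpoint fronts a cloud model or a GPU box in your rack - only `BASE_URL` changes.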
Built for teams that need the productivity without giving up control
Financial Services
Trading desks, compliance teams, and analysts dictate sensitive communications daily. With on-premise Ottex, audio stays within your SOC 2 boundary.
Healthcare
Clinical notes, patient summaries, referral letters - all dictated without audio leaving your HIPAA-compliant infrastructure. Works with your EHR system.
Legal
Attorney-client privilege means voice data cannot touch third-party servers. Period. Ottex on-premise with your own AI models keeps everything within your firm's network.
Government & Defense
Air-gapped networks, FedRAMP requirements, classified environments. Ottex can run fully offline with local models. No internet connection required.
Technology Companies
Your engineers discuss proprietary code, architecture decisions, and trade secrets. Run on-premise Ottex with Gemini through your existing GCP credits, and none of it leaves your cloud.
Consulting & Professional Services
Client engagements under NDA, strategy documents, confidential reports. Voice data from client work stays within your network — not on third-party servers.
Reuse the infrastructure and AI budget you already have
Your company already pays for Google Cloud, Azure, or AWS. You already have AI model access through those platforms. Ottex plugs into what you have — no duplicate AI costs, no markup.
Use Gemini 3 Flash through your existing GCP credits
Use GPT-audio through your Azure OpenAI deployment
Use Whisper on AWS SageMaker with your own infrastructure
Average AI cost per employee: ~$0.50/mo. Top 5% power users: ~$2/mo.
How it works with Ottex
No compromises.
On-prem deployment. Bring your own AI contracts (Gemini, OpenAI, etc.) or your own GPUs. Customize each step.
Schedule a call
Lightweight. Deployable. Compatible.
The product stays simple for employees. The infrastructure stays controllable for IT.
Deployment options
- Docker container (single image, 5-minute setup)
- Kubernetes Helm chart
- On-premise bare metal
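To make the "single image, 5-minute setup" claim concrete, a hedged sketch of what the container and Helm paths could look like. The image name, chart path, environment variables, and Helm values here are illustrative placeholders, not the actual artifact names, which come with enterprise onboarding.

```shell
# Placeholder image and variable names for illustration only.
docker run -d \
  -p 8080:8080 \
  -e OTTEX_MODEL_BASE_URL="https://models.internal.example.com/v1" \
  -e OTTEX_SSO_METADATA_URL="https://sso.example.com/saml/metadata" \
  registry.example.com/ottex/backend:latest

# Or on a cluster, the equivalent Helm install (chart path assumed):
helm install ottex ./ottex-backend-chart \
  --set model.baseUrl="https://models.internal.example.com/v1" \
  --set sso.metadataUrl="https://sso.example.com/saml/metadata"
```

Either path leaves the model endpoint and SSO provider as configuration you control, which is the whole point of the deployment model.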
Requirements
- Any cloud with container support
- Access to an AI model API (Gemini, GPT, Claude, or local)
- SSO provider (SAML 2.0 / OIDC)
Frequently asked questions
Can Ottex run completely on-premise with no internet?
Yes. With a local AI model (like Whisper + a local LLM), Ottex runs fully air-gapped. No data leaves your network, ever.
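One way to stage models for an air-gapped host, assuming Ollama serves the local LLM (Ollama does expose an OpenAI-compatible `/v1` API); the model name is an example, and the Whisper weights would come from your own artifact mirror rather than the public internet.

```shell
# On a connected staging machine (example model name):
ollama pull llama3.1          # text-processing LLM
# Fetch Whisper weights separately via your internal artifact mirror.

# After transferring the model store to the air-gapped host:
ollama serve                  # OpenAI-compatible API on localhost only
```

Once both models are local, nothing in the dictation path needs an outbound connection.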
What AI models does Ottex support?
Practically any model — cloud or local. Cloud: Gemini, Claude, GPT, Groq, and any OpenAI-compatible API. Local: Whisper (all sizes), NVIDIA Parakeet, Mistral, Llama, and any model served via Ollama or vLLM. You can even fine-tune a model for your domain and use it with Ottex. If a provider isn't supported yet, we can add it in days.
How does pricing work for enterprise?
Ottex charges a per-seat license fee. AI costs are your own - you use your existing cloud credits. This means you're not paying a markup on AI usage, and you maintain full control over costs.
How long does deployment take?
Typical deployment: 1-2 hours for a cloud setup, half a day for on-premise with SSO integration. We provide a Docker image and Helm chart.
Is Ottex SOC 2 / HIPAA compliant?
When deployed on your infrastructure, Ottex inherits your compliance posture. Your data never touches our servers. We can provide architecture documentation for your compliance team.
Can employees use Ottex without any technical setup?
Yes. IT deploys the backend and configures SSO. Employees download the Ottex app, click "Sign in with SSO", and start dictating. Zero configuration on their end.
Ready to give your team the 4+ hours back without giving up control?
Talk to us about deploying Ottex on your infrastructure. We'll help you set up a pilot with the right deployment, model, and auth setup for your team.
No credit card required.