---
name: Trugen AI
description: Build, configure, and deploy conversational video agents using the Trugen AI platform API. Use this skill when the user wants to create AI video avatars, manage knowledge bases, set up webhooks/callbacks, embed agents into websites, integrate with LiveKit, configure tools or MCPs, set up multilingual agents, or bring their own LLM to Trugen AI.
version: 1.0.1
metadata:
  openclaw:
    requires:
      env:
        - TRUGEN_API_KEY
    primaryEnv: TRUGEN_API_KEY
homepage: https://docs.trugen.ai/docs/overview
---
# Trugen AI

Build real-time conversational video agents: AI-powered avatars that see, hear, speak, and reason, with end-to-end response latency under one second.
> **Security:** Never expose `TRUGEN_API_KEY` in client-side code. For widget/iFrame embeds, use a server-side proxy to keep keys secret. See `references/embedding.md` for details.
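A minimal sketch of the proxy pattern, assuming a Bearer-token `Authorization` header and an illustrative `/v1/sessions` endpoint (both hypothetical; check the Trugen AI API reference for the actual scheme and paths). The key is read from the environment on the server, so the browser never sees it:

```python
import json
import os
import urllib.request

# Assumed base URL for illustration only.
TRUGEN_API_BASE = "https://api.trugen.ai"


def auth_headers() -> dict:
    """Build auth headers server-side so the key never reaches the browser.

    The Bearer scheme is an assumption; confirm against the API docs.
    """
    key = os.environ["TRUGEN_API_KEY"]
    return {"Authorization": f"Bearer {key}"}


def proxy_create_session(payload: dict) -> dict:
    """Forward a browser request to Trugen, attaching the secret key.

    The endpoint path is hypothetical; the point is that the outbound
    call happens on your server, not in client-side JavaScript.
    """
    req = urllib.request.Request(
        f"{TRUGEN_API_BASE}/v1/sessions",  # hypothetical path
        data=json.dumps(payload).encode(),
        headers={**auth_headers(), "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Your widget or iFrame then calls this proxy route on your own backend instead of the Trugen API directly.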
## Platform Pipeline

| Step | Component | Function |
|------|-----------|----------|
| 1 | WebRTC | Bidirectional audio/video streaming |
| 2 | STT (Deepgram) | Streaming speech-to-text |
| 3 | Turn Detection | Natural conversation boundary detection |
| 4 | LLM (OpenAI, Groq, custom) | Contextual response generation |
| 5 | Knowledge Base | Grounding answers in your data |
| 6 | TTS (ElevenLabs) | Natural, expressive speech synthesis |
| 7 | Huma-01 | Neural avatar video generation with lip sync & microexpressions |
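The pipeline above is a strict sequence per conversational turn. As a conceptual sketch (none of these are real Trugen SDK calls; the stage functions are dummy stand-ins), each stage can be modeled as a function and the turn as their composition:

```python
from typing import Callable

# One pipeline stage: takes the current representation of the turn
# (audio, text, or speech) and produces the next one.
Stage = Callable[[str], str]


def make_pipeline(*stages: Stage) -> Stage:
    """Compose stages left to right, matching the table's step order."""
    def run(frame: str) -> str:
        for stage in stages:
            frame = stage(frame)
        return frame
    return run


# Dummy stand-ins for STT -> turn detection -> LLM -> TTS -> avatar.
pipeline = make_pipeline(
    lambda audio: f"stt({audio})",
    lambda text: f"turn({text})",
    lambda text: f"llm({text})",
    lambda text: f"tts({text})",
    lambda speech: f"avatar({speech})",
)
```

Running `pipeline("hello")` traces one turn through every stage in order, which is why latency budgets on each component matter: they add up along this chain.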