voiceai

English version | 中文版本

A curated, developer-friendly learning path for building real-time voice AI agents, from your first STT call to scaling production telephony.

Voice AI has moved from research demos to shipping products in under three years. The modern stack is converging on a clear pattern: a real-time transport layer (WebRTC or telephony), a streaming pipeline of speech-to-text → LLM → text-to-speech, and a turn-taking model that decides when the agent should speak. This list is structured to mirror that learning order: start with the foundations, pick a framework, then drill into individual components and production concerns.
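The latency budget mentioned above can be made concrete with simple arithmetic. The per-stage numbers below are illustrative assumptions for a well-tuned cascaded stack, not measured benchmarks:

```python
# Illustrative end-to-end latency budget for a cascaded voice agent.
# All numbers are rough assumptions, in milliseconds.
budget_ms = {
    "endpointing (VAD + turn detection)": 200,
    "STT final transcript": 100,
    "LLM time-to-first-token": 300,
    "TTS time-to-first-byte": 150,
    "network + transport": 50,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:<40} {ms:>5} ms")
print(f"{'total voice-to-voice':<40} {total:>5} ms")  # 800 ms
```

A commonly cited target is under roughly 800 ms voice-to-voice; every component you swap moves this sum, which is why the overall budget matters more than any single provider's headline latency.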

Resources are tagged 🟢 Beginner, 🟡 Intermediate, or 🔴 Advanced. The list prefers free official docs and vendor-neutral guides, and flags resources whose authors have commercial interests.


How to use this list

Read top-to-bottom if you’re brand new. The recommended path:

  1. Foundations → understand the pipeline and latency budget
  2. Frameworks → pick one (LiveKit Agents or Pipecat are the safest open-source bets) and ship a hello-world
  3. Components (STT, TTS, LLM, VAD, turn detection) → swap pieces to learn what each layer does
  4. Transport & telephony → connect to a real phone number
  5. Evaluation, production, ethics → make it safe enough to ship

Table of contents

  1. Foundational concepts and learning paths
  2. Frameworks and orchestration platforms
  3. Speech-to-text (STT / ASR)
  4. Text-to-speech (TTS)
  5. LLMs for voice and real-time AI
  6. Voice activity detection and turn-taking
  7. WebRTC fundamentals
  8. Telephony and SIP
  9. Tutorials and hands-on projects
  10. GitHub starter repos and awesome lists
  11. Datasets and benchmarks
  12. Beginner-accessible research papers
  13. Evaluation and testing
  14. Production, deployment, and scaling
  15. Ethics, safety, and regulation
  16. Blogs and newsletters
  17. Podcasts
  18. Communities
  19. Conferences and events
  20. Hackathons and competitions

1. Foundational concepts and learning paths

Start here. These resources establish the mental model of the voice agent pipeline and the latency budget you’ll fight for the rest of your career.

2. Frameworks and orchestration platforms

The frameworks below all let you wire STT, an LLM, and TTS together. For open-source production work, LiveKit Agents and Pipecat are the two safest bets; for managed dashboards, Vapi, Retell, and Bland win on time-to-first-call.

Open-source frameworks

Managed platforms

Realtime / speech-to-speech APIs

Vendor-neutral comparisons

3. Speech-to-text (STT / ASR)

Pick one streaming STT and learn it deeply before shopping around. Deepgram, AssemblyAI, and Whisper-derivatives cover most use cases.
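When you do start comparing providers, word error rate (WER) is the standard metric, and it is worth computing once by hand to understand how substitutions, insertions, and deletions are counted. A minimal, provider-agnostic implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("turn the lights off", "turn the light off"))  # 0.25
```

Note that published benchmark numbers also depend heavily on text normalization (casing, punctuation, numerals) applied before scoring.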

Commercial APIs

Open source

Benchmarks and explainers

4. Text-to-speech (TTS)

Latency, not raw quality, is what kills voice agents: prioritize providers offering true streaming with first-byte latency under 200 ms.
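Whichever provider you evaluate, measure time-to-first-byte yourself rather than trusting marketing numbers. A provider-agnostic sketch, assuming the SDK exposes synthesized audio as an iterator of byte chunks (the `fake_synthesize_stream` function here is a hypothetical stand-in for a real SDK call):

```python
import time
from typing import Iterable, Iterator

def measure_ttfb(chunks: Iterable[bytes]) -> tuple[float, int]:
    """Consume a streaming TTS response; return time-to-first-byte
    in seconds and total bytes received."""
    start = time.monotonic()
    ttfb = None
    total = 0
    for chunk in chunks:
        if ttfb is None:
            ttfb = time.monotonic() - start
        total += len(chunk)
    return (ttfb if ttfb is not None else float("inf"), total)

def fake_synthesize_stream(text: str) -> Iterator[bytes]:
    # Stand-in for a real provider SDK call; yields audio chunks.
    for _ in text.split():
        yield b"\x00" * 320  # 10 ms of 16 kHz 16-bit mono silence

ttfb, nbytes = measure_ttfb(fake_synthesize_stream("hello there voice agent"))
print(f"first byte after {ttfb * 1000:.1f} ms, {nbytes} bytes total")
```

Run the same harness against each candidate provider from the region where your agents will actually be deployed, since network distance often dominates the difference.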

Commercial APIs

Open source

Streaming and ethics

5. LLMs for voice and real-time AI

A voice agent’s perceived intelligence is bounded by how fast the LLM streams its first token; sub-300 ms time-to-first-token (TTFT) changes the conversational feel entirely.
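One standard trick behind that fast feel is to start TTS on the first complete sentence rather than waiting for the full LLM response. A framework-agnostic sketch of splitting a streamed token sequence into sentence-sized chunks (the naive punctuation regex is an assumption; real pipelines also handle abbreviations and decimals):

```python
import re
from typing import Iterable, Iterator

# Buffer ends with ., !, or ?, optionally followed by closing quotes/brackets.
SENTENCE_END = re.compile(r"[.!?]['\")\]]*$")

def sentence_chunks(tokens: Iterable[str]) -> Iterator[str]:
    """Accumulate streamed LLM tokens and yield a chunk as soon as a
    sentence boundary appears, so TTS can start speaking early."""
    buf = ""
    for tok in tokens:
        buf += tok
        if SENTENCE_END.search(buf):
            yield buf.strip()
            buf = ""
    if buf.strip():
        yield buf.strip()  # flush whatever remains at end of stream

# Simulated token stream from a streaming chat completion:
tokens = ["Sure", "!", " I", " can", " help", ".", " What's", " up", "?"]
print(list(sentence_chunks(tokens)))  # ['Sure!', 'I can help.', "What's up?"]
```

The trade-off is classic pipelining: smaller chunks cut perceived latency but give the TTS engine less context for prosody.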

Low-latency inference

Speech-to-speech models

Voice-specific prompting and tools

6. Voice activity detection and turn-taking

Pure VAD is no longer enough: modern agents combine acoustic VAD with a small semantic model that predicts end-of-utterance from words and prosody.
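To build intuition for what the acoustic half does, here is a toy energy-threshold VAD with a hangover counter. Production systems use trained models such as Silero VAD; this sketch, with made-up thresholds, is for illustration only:

```python
def energy_vad(frames, threshold=0.02, hangover=3):
    """Classify each audio frame (a list of float samples in [-1, 1])
    as speech (True) or non-speech (False). `hangover` keeps speech
    active through a few quiet frames so short pauses inside an
    utterance aren't treated as the end of the turn."""
    decisions = []
    quiet = hangover  # start in the non-speech state
    for frame in frames:
        rms = (sum(s * s for s in frame) / len(frame)) ** 0.5
        if rms >= threshold:
            quiet = 0
        else:
            quiet += 1
        decisions.append(quiet < hangover)
    return decisions

loud = [0.1] * 160     # one 10 ms frame of "speech" at 16 kHz
silent = [0.001] * 160 # near-silence
frames = [loud, loud, silent, loud, silent, silent, silent, silent]
print(energy_vad(frames))
# [True, True, True, True, True, True, False, False]
```

Notice the mid-utterance silent frame stays classified as speech; semantic turn detectors exist precisely because tuning this hangover by energy alone cannot distinguish a pause from a finished thought.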

7. WebRTC fundamentals

WebRTC is the default transport for voice agents that don’t run over the phone network. Understanding ICE, STUN, TURN, and SFU architecture is non-negotiable for production work.

8. Telephony and SIP

The phone network has its own physics. Once you know which SIP trunk provider to point at LiveKit or Pipecat, you can ship.

9. Tutorials and hands-on projects

Pick one tutorial and finish it before starting another. Voice AI is unforgiving of half-built pipelines.

10. GitHub starter repos and awesome lists

Clone these instead of writing boilerplate from scratch.

11. Datasets and benchmarks

You’ll rarely train from scratch, but knowing which dataset a model was trained on explains its accents, languages, and failure modes.

12. Beginner-accessible research papers

These are the landmark papers behind the models you’ll actually use. Read the Whisper and Common Voice papers first; they’re unusually approachable.

13. Evaluation and testing

You can’t ship what you can’t measure. Voice-agent evaluation is fundamentally probabilistic: the same scenario can pass on one run and fail on the next, so simulation and statistics matter more than fixed test cases.
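In practice this means running each scenario many times and reasoning about pass rates with confidence intervals rather than single pass/fail outcomes. A small helper using the standard Wilson score interval (nothing voice-specific):

```python
import math

def wilson_interval(passes: int, runs: int, z: float = 1.96):
    """Wilson score confidence interval (default 95%) for a pass rate
    observed over repeated simulated conversations."""
    if runs == 0:
        return (0.0, 1.0)
    p = passes / runs
    denom = 1 + z * z / runs
    center = (p + z * z / (2 * runs)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / runs + z * z / (4 * runs * runs)
    )
    return (max(0.0, center - margin), min(1.0, center + margin))

# 17 passes out of 20 runs: the point estimate is 85%, but the
# interval shows the true rate could plausibly be ~64% to ~95%.
low, high = wilson_interval(17, 20)
print(f"pass rate 85%, 95% CI [{low:.2f}, {high:.2f}]")
```

Twenty runs sounds like a lot until you see how wide that interval is; it is why fixed assertion suites borrowed from unit testing tend to flake on voice agents.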

14. Production, deployment, and scaling

Real production voice infrastructure is the hardest unsolved problem in this space. Read these before quoting anyone a per-minute price.

15. Ethics, safety, and regulation

If you’re shipping a voice agent in 2026, disclosure and consent are no longer optional. The FCC and EU AI Act both have teeth.

16. Blogs and newsletters

Subscribe to two or three to stay current; the field moves quickly.

17. Podcasts

18. Communities

19. Conferences and events

20. Hackathons and competitions


Suggested learning path

  1. Week 1 (Foundations): Read the LiveKit pipeline post and the Voice AI Illustrated Primer (sections 1, 7).
  2. Week 2 (First agent): Finish the LiveKit or Pipecat quickstart end-to-end (sections 2, 9).
  3. Week 3 (Components): Swap STT, TTS, and LLM providers; benchmark latency (sections 3, 4, 5).
  4. Week 4 (Turn-taking and telephony): Add Silero VAD and a turn detector; connect a SIP trunk (sections 6, 8).
  5. Week 5 (Production): Add evaluation and observability, and read the FCC/EU AI Act material (sections 13, 14, 15).
  6. Ongoing: Subscribe to two newsletters and join a voice AI community on LinkedIn (sections 16, 17, 18).

Contributing

Pull requests welcome. Resources must be active in the last 12 months, accessible to developers, and vendor-neutral or clearly labeled when authored by a commercial party. Open an issue to suggest additions or removals.