DCS Overlord Bot

OverlordBot is an AI AWACS that listens to voice communications on SimpleRadio Standalone and uses machine learning to understand and reply to common AWACS radio calls.

The bot code is completely free & open source.

So, it's a bot that you speak to, and it talks back with ATC and AWACS info within DCS, using SRS as the bridge to listen and transmit. It's usually found running on servers like Hoggit's GAW, so that's a good way to play with it - you just use SRS, with nothing to install on your side. Server info here:

Setting up your own server is quite the job, as it needs Tacview importing, a database, and Azure speech services, although you can run a lot of these on free tiers.

Demos: some AWACS and ATC video examples:

The player's guide gives a good overview





This is quite nice.

So it’s like @discobot gets to run the AWACS then?

@discobot declare!


Hi! To find out what I can do, say @discobot display help.

It’s a complicated back-end for sure, so amazing work for them to get it going. The big downside is the use of paid services for the speech input: you have to train the Azure model to recognize your pronunciation of your callsign and the various airport names (I can barely say them in English, poor Azure). For the speech output, the nice voices also cost money.

The architecture is made up of a series of different projects:

(from: Hosting OverlordBot · Wiki · OverlordBot / SRS-Bot · GitLab)

One really nice thing is the TacScribe app, which essentially reads the TacView export from a live DCS mission and populates a set of PostgreSQL GIS tables - I can see that being useful for all sorts of things when you need a georeferenced live view of a running DCS server.
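Once you have live positions in georeferenced tables, bearing/range answers (the classic "bogey dope" reply) fall out of some standard great-circle math. Here's a hedged sketch of that calculation - my own illustration of the idea, not OverlordBot's actual code or schema:

```python
import math

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles


def bearing_and_range(lat1, lon1, lat2, lon2):
    """Return (initial bearing in degrees true, range in nautical miles)
    from point 1 to point 2, both given as decimal-degree lat/lon."""
    phi1, lam1, phi2, lam2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlam = lam2 - lam1

    # Initial great-circle bearing, normalized to 0-360.
    y = math.sin(dlam) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlam))
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    # Haversine great-circle distance.
    a = (math.sin((phi2 - phi1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    rng = 2.0 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))
    return bearing, rng
```

Feed it the player's and contact's coordinates pulled from the GIS tables and you have the "BRA" part of the call (PostGIS can of course do the same with `ST_Azimuth`/`ST_Distance` server-side).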

It certainly looks like a fun project for someone, so great they got it all working.

Looking forward to a point when there will be good open source pretrained speech recognition models. A typical gamer machine should eat the inference step of such a model for breakfast (maybe not while running DCS on the side, though).


There is Mozilla’s DeepSpeech (GitHub - mozilla/DeepSpeech: DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.) which is pretty good, and you’re right that any modern GPU eats up the TensorFlow training.
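One practical wrinkle: DeepSpeech's `Model.stt()` wants 16 kHz, 16-bit mono PCM, while game/SRS audio typically arrives at a higher sample rate. A minimal sketch of the resampling step, assuming 48 kHz mono input and using naive decimation (a real pipeline would apply an anti-aliasing low-pass filter first):

```python
import array


def downsample_48k_to_16k(pcm_bytes: bytes) -> bytes:
    """Decimate 48 kHz, 16-bit mono PCM down to the 16 kHz that
    DeepSpeech expects, by keeping every 3rd sample. Naive on
    purpose - production code should low-pass filter first."""
    samples = array.array('h')  # signed 16-bit, native byte order
    samples.frombytes(pcm_bytes)
    return samples[::3].tobytes()


# Hooking it up to the pip-installable deepspeech package might then
# look roughly like this (model filename is illustrative):
#   import numpy as np
#   from deepspeech import Model
#   model = Model("deepspeech-0.9.3-models.pbmm")
#   text = model.stt(np.frombuffer(downsample_48k_to_16k(pcm), np.int16))
```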

I guess this project could also have leaned on the local Windows speech services a bit more - things like VoiceAttack are just layers over the Win32 'on device' training and recognition. Since people have to run SRS anyway, and the DCS client is Windows-only, that would have been an alternative way to go. The Azure speech-output voices are very good, though.

© 2021 Mudspike.com | Articles Website | Forums Rules & FAQ