Fangjun Kuang
Committed by GitHub

Update readme to contain links for pre-built Apps (#1853)

@@ -4,31 +4,17 @@ Please refer to
https://k2-fsa.github.io/sherpa/onnx/android/index.html
for usage.
- [SherpaOnnxSpeakerDiarization](./SherpaOnnxSpeakerDiarization) It is for speaker diarization.
- [SherpaOnnx](./SherpaOnnx) It uses a streaming ASR model.
- [SherpaOnnx2Pass](./SherpaOnnx2Pass) It uses a streaming ASR model
for the first pass and use a non-streaming ASR model for the second pass
- [SherpaOnnxKws](./SherpaOnnxKws) It demonstrates how to use keyword spotting
- [SherpaOnnxSpeakerIdentification](./SherpaOnnxSpeakerIdentification) It demonstrates
how to use speaker identification
- [SherpaOnnxTts](./SherpaOnnxTts) It is for standalone text-to-speech.
- [SherpaOnnxTtsEngine](./SherpaOnnxTtsEngine) It is for text-to-speech engine;
you can use it to replace the system TTS engine.
- [SherpaOnnxVad](./SherpaOnnxVad) It demonstrates how to use a VAD
- [SherpaOnnxVadAsr](./SherpaOnnxVadAsr) It uses a VAD with a non-streaming
ASR model.
- [SherpaOnnxWebSocket](./SherpaOnnxWebSocket) It shows how to write a websocket
client for the Python streaming websocket server.
- [SherpaOnnxAudioTagging](./SherpaOnnxAudioTagging) It shows how to use audio tagging.
- [SherpaOnnxAudioTaggingWearOS](./SherpaOnnxAudioTagging) It shows how to use audio tagging on WearOS.
|Folder| Pre-built APK | Description|
|------|---------------|-------------|
|[SherpaOnnxSpeakerDiarization](./SherpaOnnxSpeakerDiarization)| | It is for speaker diarization.|
|[SherpaOnnx](./SherpaOnnx)| [URL](https://k2-fsa.github.io/sherpa/onnx/android/apk.html)| It uses a streaming ASR model.|
|[SherpaOnnx2Pass](./SherpaOnnx2Pass)|[URL](https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass.html)| It uses a streaming ASR model for the first pass and a non-streaming ASR model for the second pass.|
|[SherpaOnnxKws](./SherpaOnnxKws)|[URL](https://k2-fsa.github.io/sherpa/onnx/kws/apk.html)| It demonstrates how to use keyword spotting.|
|[SherpaOnnxSpeakerIdentification](./SherpaOnnxSpeakerIdentification)|[URL](https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk.html)| It demonstrates how to use speaker identification.|
|[SherpaOnnxTts](./SherpaOnnxTts)|[URL](https://k2-fsa.github.io/sherpa/onnx/tts/apk.html)| It is for standalone text-to-speech.|
|[SherpaOnnxTtsEngine](./SherpaOnnxTtsEngine)|[URL](https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html)| It is a text-to-speech engine; you can use it to replace the system TTS engine, e.g., in an e-book reader app.|
|[SherpaOnnxVad](./SherpaOnnxVad)|[URL](https://k2-fsa.github.io/sherpa/onnx/vad/apk.html)| It demonstrates how to use a VAD.|
|[SherpaOnnxVadAsr](./SherpaOnnxVadAsr)|[URL](https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr.html)| It uses a VAD with a non-streaming ASR model.|
|[SherpaOnnxWebSocket](./SherpaOnnxWebSocket)| |It shows how to write a websocket client for the [Python streaming websocket server](https://github.com/k2-fsa/sherpa-onnx/blob/master/python-api-examples/streaming_server.py).|
|[SherpaOnnxAudioTagging](./SherpaOnnxAudioTagging)|[URL](https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk.html)| It shows how to use audio tagging.|
|[SherpaOnnxAudioTaggingWearOS](./SherpaOnnxAudioTaggingWearOS)|[URL](https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos.html)| It shows how to use audio tagging on WearOS.|
@@ -2,6 +2,11 @@
This directory contains Flutter examples of `sherpa-onnx`.

| Directory | Pre-built App |
|-----------|---------------|
|[./tts](./tts)|[URL](https://k2-fsa.github.io/sherpa/onnx/flutter/pre-built-app.html#text-to-speech-tts-speech-synthesis)|
|[./streaming_asr](./streaming_asr)|[URL](https://k2-fsa.github.io/sherpa/onnx/flutter/pre-built-app.html#streaming-speech-recognition-stt-asr)|
# Ways to create an example
```bash
flutter create --platforms windows,macos streaming_asr
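# Not part of the original command; a hedged usage sketch. After the project
# is created, you could run it on a desktop platform, for example:
#   cd streaming_asr
#   flutter run -d windows   # or: flutter run -d macos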
@@ -6,3 +6,7 @@ with Object Pascal API to develop speech related applications.
**Documentation for this directory**:
https://k2-fsa.github.io/sherpa/onnx/lazarus/index.html
|Directory| Pre-built App|
|---------|--------------|
|[./generate_subtitles](./generate_subtitles)|[URL](https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles.html)|
@@ -3,6 +3,12 @@
This directory contains examples showing how to use Next-gen Kaldi in MFC
for speech recognition.
|Directory| Pre-built exe (x64)|Pre-built exe (x86)| Description|
|---------|--------------------|-------------------|------------|
|[./NonStreamingSpeechRecognition](./NonStreamingSpeechRecognition)|[URL](https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.10.43/sherpa-onnx-non-streaming-asr-x64-v1.10.43.exe)|[URL](https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.10.43/sherpa-onnx-non-streaming-asr-x86-v1.10.43.exe)| Non-streaming speech recognition|
|[./StreamingSpeechRecognition](./StreamingSpeechRecognition)|[URL](https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.10.43/sherpa-onnx-streaming-asr-x64-v1.10.43.exe)|[URL](https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.10.43/sherpa-onnx-streaming-asr-x86-v1.10.43.exe)| Streaming speech recognition|
|[./NonStreamingTextToSpeech](./NonStreamingTextToSpeech)|[URL](https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.10.43/sherpa-onnx-non-streaming-tts-x64-v1.10.43.exe)|[URL](https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.10.43/sherpa-onnx-non-streaming-tts-x86-v1.10.43.exe)| Non-streaming text to speech|
Caution: You need Windows with Visual Studio 2022 installed in order to
compile these examples.
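As a rough sketch only (the solution file name below is an assumption, not taken from this README), building from a Visual Studio 2022 Developer Command Prompt might look like:

```bash
# Hypothetical solution name; replace it with the actual .sln file in this directory.
msbuild mfc-examples.sln /p:Configuration=Release /p:Platform=x64
```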
@@ -18,4 +18,6 @@ find nodejs-addon-examples -name package.json -type f -exec sed -i.bak 's/1\.10\
find harmony-os -name "README.md" -type f -exec sed -i.bak 's/1\.10\.42/1\.10\.43/g' {} \;
find harmony-os -name oh-package.json5 -type f -exec sed -i.bak 's/1\.10\.42/1\.10\.43/g' {} \;
find mfc-examples -name "README.md" -type f -exec sed -i.bak 's/1\.10\.42/1\.10\.43/g' {} \;
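# Remove the .bak backup files created by "sed -i.bak" above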
find . -name "*.bak" -exec rm {} \;