❓ Questions and Help
What is your question?
I'm trying to deploy paraformer-zh as an ASR model in a Python environment, following the example provided in FunASR/examples. I used the exact same code without any modifications, but the output is unexpected: instead of the Chinese transcript, the model generates strange strings like "galaxy" or "galaxy xy".
I've double-checked the input audio's properties (bit depth, sampling rate, and data type), and everything appears to be correct. At this point, I'm unsure what might be causing this issue.
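For reference, this is roughly how I verified the audio properties. It's a minimal stdlib-only sketch (the `wav_info` helper and the file name are my own, not from FunASR); it writes a short 16 kHz mono 16-bit PCM file, which is the format the paraformer-zh example audio uses, and reads the header back:

```python
import struct
import wave

def wav_info(path):
    """Return (sample_rate, bit_depth_bits, channels) of a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getframerate(), w.getsampwidth() * 8, w.getnchannels()

# Demo: write 0.1 s of 16 kHz mono 16-bit silence, then inspect it.
with wave.open("check.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 2 bytes per sample = 16-bit PCM
    w.setframerate(16000)    # 16 kHz
    w.writeframes(struct.pack("<1600h", *([0] * 1600)))

rate, bits, channels = wav_info("check.wav")
print(rate, bits, channels)
```

Running the same check against my real input file reported 16000 Hz, 16-bit, mono, so the audio itself looks fine.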
Code
from funasr import AutoModel
model = AutoModel(model="paraformer-zh", device="cuda", model_revision="v2.0.4")
result = model.generate(
    input='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav',
)
print(result)
Output
Notice: ffmpeg is not installed. torchaudio is used to load audio
If you want to use ffmpeg backend to load audio, please install it by:
sudo apt install ffmpeg # ubuntu
# brew install ffmpeg # mac
funasr version: 1.2.7.
Check update of funasr, and it would cost few times. You may disable it by set `disable_update=True` in AutoModel
You are using the latest version of funasr-1.2.7
Downloading Model from https://www.modelscope.cn to directory: C:\Users\Ludwig\.cache\modelscope\hub\models\iic\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
2025-11-14 04:16:36,063 - modelscope - INFO - Use user-specified model revision: v2.0.4
WARNING:root:trust_remote_code: False
rtf_avg: 0.078: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.29it/s]
[{'key': 'asr_example_zh', 'text': 'galaxy xy', 'timestamp': [[750, 2930], [3370, 4090]]}]
What's your environment?
- OS (e.g., Linux): WINDOWS 10
- FunASR Version (e.g., 1.0.0): 1.2.7
- ModelScope Version (e.g., 1.11.0): 1.31.0
- PyTorch Version (e.g., 2.0.0): 2.4.0+cu118
- How you installed funasr (pip, source): from git clone with wheel build
- Python version: 3.11.9
- GPU (e.g., V100M32): RTX4070
- CUDA/cuDNN version (e.g., cuda11.7): 12.8
- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1): no docker, no wsl :(
- Any other relevant information: