An N100 would be a big step up from a Pi4, yes (tiny to base model). A much bigger step would be a machine with a GPU to run the large models.
26.12.2024 23:54 · @synesthesiam.bsky.social
I've heard the tiny model is pretty snappy on the Pi 5 (1-2s). The transcription accuracy is still low. There are two plans to address this: my Rhasspy speech add-on, and a modification to Whisper to bias it towards our commands.
FYI with HA Cloud and OpenAI, people get a great experience on the Pi 4.
I'm the main developer on Assist. This is the out-of-the-box experience locally with hardware below the recommended specs (N100).
I'm working on an add-on (in beta) that will improve response times on the Pi 4 for a limited set of commands.
The AI HAT is optimized for visual models, not voice.