Wow guys, the Open WebUI team has been quietly rolling out some incredible features lately.
I just discovered they added:
- Video chat with LLMs that have vision skills. You can use your camera and ask the model to describe what it sees, and it does text-to-speech and speech-to-text right on your computer!
- A tool library for function calling. It's similar to what other AI platforms offer, but I'm still figuring out how to use it (rough sketch below the list).
- Knowledge libraries for each model. You can load up PDFs to give models expertise on specific topics. Great for customizing models for different uses.
- The ability to run Python code generated in chat. Super handy for testing stuff quickly (quick example below).
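For the tools feature, here's roughly what a tool file looks like from the examples I've poked at. I'm still learning this myself, so treat it as a sketch: the `Tools` class name seems to be the convention, but the method, docstring, and return value here are just a made-up example of mine.

```python
"""
title: Current Time
description: Made-up example tool that returns the current time.
"""

from datetime import datetime


class Tools:
    def get_current_time(self) -> str:
        """
        Get the current date and time.
        :return: the current date and time as a formatted string
        """
        # Open WebUI seems to build the function-calling spec from the
        # type hints and the docstring, so both appear to matter.
        return datetime.now().strftime("%Y-%m-%d %H:%M:%S")
```

From what I can tell, you paste that into the Tools section of the workspace and enable it for a model, but someone who's actually used it should correct me.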
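And for the code runner, this is the kind of throwaway snippet I mean: ask the model for something small (the numbers here are just my example), then run it right in the chat.

```python
# Quick sanity check run straight from the chat window, stdlib only.
import statistics

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(f"mean={statistics.mean(data):.2f}  stdev={statistics.stdev(data):.2f}")
```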
There’s probably more I missed. These devs keep adding cool stuff without making a big deal about it. Has anyone made videos showing how to use the new features? I’d love to see some tutorials.