Web Dev Person / Ex Performance ECU Calibrations Person

  • 2 Posts
  • 9 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • This project is entirely web-based, using Vue 3. It doesn’t use LangChain, and honestly I haven’t looked into it before, but I do see they offer a JS library I could utilize. I’ll definitely be looking into that!

    As a result, there is currently no LLM function calling, and apps like LM Studio don’t support function calling when hosting models locally, from what I remember. Adding the ability to retrieve outside data, like searching the web and generating a response with the results, is definitely on my list.
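
    For what it’s worth, here’s a rough sketch of what OpenAI-style function calling for web search could look like against any OpenAI-compatible endpoint. This assumes the endpoint supports tool calls; `searchWeb`, the model name, and the base URL are all placeholders:

    ```typescript
    // Sketch only: OpenAI-style tool calling with a hypothetical web-search
    // tool. Assumes an OpenAI-compatible endpoint that supports tool calls.
    const tools = [
      {
        type: "function",
        function: {
          name: "search_web",
          description: "Search the web and return result snippets",
          parameters: {
            type: "object",
            properties: { query: { type: "string" } },
            required: ["query"],
          },
        },
      },
    ];

    // Hypothetical search helper; swap in a real search API.
    async function searchWeb(query: string): Promise<string[]> {
      return [`stub result for: ${query}`];
    }

    async function chatWithTools(baseUrl: string, messages: any[]): Promise<string> {
      const res = await fetch(`${baseUrl}/v1/chat/completions`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "local-model", messages, tools }),
      });
      const data = await res.json();
      const msg = data.choices[0].message;
      const call = msg.tool_calls?.[0];
      if (call) {
        // The model requested the tool: run it, append the result, and ask
        // the model again so it can answer using the retrieved data.
        const { query } = JSON.parse(call.function.arguments);
        messages.push(msg, {
          role: "tool",
          tool_call_id: call.id,
          content: JSON.stringify(await searchWeb(query)),
        });
        return chatWithTools(baseUrl, messages);
      }
      return msg.content;
    }
    ```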

  • Local models are indeed already supported! In fact, any API (local or otherwise) that uses the OpenAI response format (which is the de facto standard) will work.

    So you can use something like LM Studio to host a model locally and connect to it via the local API it spins up.
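
    For example, here’s a minimal connection sketch. LM Studio’s local server defaults to http://localhost:1234/v1, but the port is configurable, so treat the URL as an assumption:

    ```typescript
    // Sketch: chat against LM Studio's OpenAI-compatible local server.
    // Default base URL is http://localhost:1234/v1 (configurable in LM Studio).
    async function askLocalModel(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:1234/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "local-model", // LM Studio answers with whatever model is loaded
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }
    ```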

    If you want to get crazy… fully local in-browser models are also supported, currently in Chrome and Edge. The app downloads the selected model in full, loads it via WebGPU in your browser, and lets you chat with it. It’s more experimental and takes real hardware power, since you’re hosting the model entirely in the browser itself.
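
    To illustrate, here’s a sketch of fully in-browser inference using @mlc-ai/web-llm, one library that works this way (the comment doesn’t say which library the project actually uses, and the model id is just an example from web-llm’s prebuilt list):

    ```typescript
    import { CreateMLCEngine } from "@mlc-ai/web-llm";

    // Downloads the model weights and compiles them for WebGPU; needs a
    // WebGPU-capable browser (currently Chrome or Edge) and real hardware.
    const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC", {
      initProgressCallback: (p) => console.log(p.text),
    });

    // Same OpenAI-style chat interface, but running entirely in the browser.
    const reply = await engine.chat.completions.create({
      messages: [{ role: "user", content: "Hello from WebGPU!" }],
    });
    console.log(reply.choices[0].message.content);
    ```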

  • SimpleDev@infosec.pub to Technology@lemmy.world · *Permanently Deleted*

    I disliked Signal app-wise, and the Matrix app was a buggy mess for me and the four other people who tried to use it as well.

    SimpleX was easy to set up and has been, for the most part, stable for all of us.

    Basically, to answer your question: people like different things.

    SimpleX isn’t perfect by any means, but it seems to be developed at a decent pace, with noticeable improvements being made.