Since you are already making OpenAI-format API calls to ChatGPT for your integration, please consider making the AI server endpoint configurable so it can point to local/on-premises OpenAI-API-compatible servers such as Ollama or LM Studio. The potential gains are significant for what should be a relatively low-cost feature to implement, since you are already sending OpenAI-formatted requests (a rough sketch follows the list below).
- It would address privacy concerns for users who want to use AI but do not want third parties involved (or CANnot have them involved due to security/IP requirements).
- For AI-forward individuals or companies, it would allow replies to leverage locally fine-tuned/trained LLMs and/or Retrieval-Augmented Generation (RAG) pipelines, bringing contextual or specialty knowledge into the responses, e.g., LLMs tuned or trained on proprietary/private knowledge, or RAG frameworks that ground responses in traditional local knowledge sources that may already exist.
- The same code could also address open ChatGPT-related requests, such as exposing the API key used by the internal engine so users with paid ChatGPT licenses can supply their own. It would also open the door to future integrations with other API providers.
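For illustration, here is a minimal sketch of what a configurable endpoint could look like, assuming the integration uses the official `openai` Python client; the environment variable names (`AI_BASE_URL`, `AI_API_KEY`, `AI_MODEL`) and their defaults are hypothetical placeholders, not your actual code:

```python
import os
from openai import OpenAI

# Defaults to the hosted service; override AI_BASE_URL to point at a
# local OpenAI-compatible server, e.g. Ollama (http://localhost:11434/v1)
# or LM Studio (http://localhost:1234/v1).
client = OpenAI(
    base_url=os.getenv("AI_BASE_URL", "https://api.openai.com/v1"),
    # Exposing the key also covers the paid-license request; local
    # servers typically accept any non-empty placeholder string here.
    api_key=os.getenv("AI_API_KEY", "sk-your-key"),
)

response = client.chat.completions.create(
    model=os.getenv("AI_MODEL", "gpt-4o-mini"),  # e.g. "llama3.1" under Ollama
    messages=[{"role": "user", "content": "Hello from a configurable endpoint!"}],
)
print(response.choices[0].message.content)
```

Because both Ollama and LM Studio expose OpenAI-compatible `/v1` endpoints, the existing request/response handling should work unchanged against either.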
- If implemented, this feature will buy you ANYTHING you want for Christmas, and it will foster world peace with its presence alone!
(okay, maybe that last one might not be ENTIRELY true… but you never know!)