I am trying to build a chat service that uses OpenAI as the LLM and LangChain for remembering conversation context.
The memory class I am using is `VectorStoreRetrieverMemory`:
const memory = new VectorStoreRetrieverMemory({ vectorStoreRetriever: vectorStore.asRetriever(1), memoryKey: "history" });
The backend is in Node.js. The flow goes something like this:
- I make a call whenever a message is added.
- Right now, I load all the previous messages and add them into memory as `memory.saveContext({ input: inputMsg }, { output: outputMsg })`, then I make the call to the LLM with the previous history.

This makes each call very slow, since it has to re-add every previous message on each new message.
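To make the problem concrete, here is roughly what each request does today. This is a simplified sketch with the LangChain memory stubbed out as a plain array (the turn data is made up); the point is the O(n) replay of the whole history on every request:

```javascript
// Simplified sketch of the current per-request flow.
// In the real service this is LangChain's VectorStoreRetrieverMemory;
// here it is stubbed as a plain array so the shape of the problem is clear.
const allMessages = [
  { input: "hi", output: "hello!" },
  { input: "how are you?", output: "fine, thanks" },
];

function buildMemoryForRequest(history) {
  const memory = [];
  // O(n) replay: every prior turn is re-saved into memory on EVERY request
  for (const turn of history) {
    // stand-in for memory.saveContext({ input: ... }, { output: ... })
    memory.push({ input: turn.input, output: turn.output });
  }
  return memory;
}

const memory = buildMemoryForRequest(allMessages);
console.log(memory.length); // grows with conversation length, so each call gets slower
```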
I would like to somehow persist the memory object, load it to pass into the LLM call, update it when a new message comes back, and then save it again.
If there is a better way to do this, please guide me.
I tried to find ways to save the memory object, but failed to find any.