LM Studio and Ollama options don't work for AI actions.
complete
Al Gusarov
Just bought the app today, it's great, but I'm a bit disappointed by this bug/issue.
So I can chat with both LM Studio and Ollama models without any issue, but AI actions don't work for them; this is perhaps related to the 'system' role problem mentioned in one of the other tickets here. A video is attached, in which I:
- Demonstrate that I can chat with an LM Studio-hosted model, and what happens when I run an AI action with it
- Do the same with Ollama
Link to publicly shared video on GDrive:
Naveen
complete
Thank you for the detailed feedback. This will be fixed in v0.24. Please let me know if you find any more bugs.
Al Gusarov
For reference, this is the log from the chat window for the same request (it works fine there).
```
[2024-09-03 08:28:03.029] [INFO] Received POST request to /v1/chat/completions with body: {
  "model": "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
  "stream": true,
  "messages": [
    {
      "content": "You are FridayGPT, a helpful assistant.",
      "role": "system"
    },
    {
      "role": "user",
      "content": "ix all the grammar errors in the text below. Also change phrasing if current version is too archaic or can’t be easily understood by native modern and relatively young speaker of relevant language. Reply on with fixed text, nothing else.\\n\\nText: testing this apprch"
    }
  ]
}
```
Al Gusarov
Yes, I just did a quick test. This is the LM Studio server log: the request still uses only the 'system' role, so the model doesn't behave correctly. It's strange that the same build (FridayGPT 0.22 here) causes the issue for me but not for you, but this is what I get. I have also seen at least one other ticket here about the same issue.
```
[2024-09-03 08:20:46.230] [INFO] Received POST request to /v1/chat/completions with body: {
  "messages": [
    {
      "role": "system",
      "content": "Fix all the grammar errors in the text below. Also change phrasing if current version is too archaic or can’t be easily understood by native modern and relatively young speaker of relevant language. Reply on with fixed text, nothing else.\n\nText: testing this apprch"
    }
  ],
  "stream": false,
  "model": "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf"
}
```
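Comparing the two logs: the working chat request sends the instructions as a system message and the user's text as a user message, while the AI-action request collapses everything into a single system message. A minimal sketch of how the action payload could be split into the two roles (the function name and prompt wording below are illustrative assumptions, not FridayGPT's actual internals):

```python
# Sketch: splitting an AI-action prompt into separate system/user messages.
# The helper name and payload shape are hypothetical; the message format
# follows the OpenAI chat-completions convention shown in the logs above.

def build_action_payload(model: str, instruction: str, user_text: str) -> dict:
    """Build a chat-completions body with the action instruction as the
    system message and the selected text as the user message."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": instruction},
            {"role": "user", "content": user_text},
        ],
    }

payload = build_action_payload(
    "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
    "Fix all the grammar errors in the text below. Reply only with the fixed text.",
    "testing this apprch",
)
print([m["role"] for m in payload["messages"]])  # → ['system', 'user']
```

With this split, the model sees the instruction as behavior-setting context and the selected text as the actual input to act on, matching the structure of the working chat-window request.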
Naveen
Al Gusarov: I'm using system role to pass the prompt in AI action. Is this not correct? Are you not getting the expected response in AI Action?
Al Gusarov
Naveen, indeed I'm getting incorrect results, either empty or gibberish; see the original video. I just double-checked some docs online to be sure, and indeed you are not supposed to use the system message to pass the user prompt; it results in abnormal replies.
Al Gusarov
In many language models and APIs, including those that follow the OpenAI API format, the system message is typically used to set the behavior or context for the AI, whereas the user message is where the actual prompt or question is placed. The model usually expects meaningful input in the user message to generate a coherent and relevant response. Here's a quick overview of how these messages are typically used:
- System Message: This is often used to define the role of the AI or provide instructions on how it should behave. For example, you might tell the model to act as a friendly assistant or to focus on providing technical information.
- User Message: This is where you provide the actual question, request, or input that you want the model to respond to.
- Assistant Message: This is where the model's response is generated, based on the previous context provided by the system and user messages.
If you pass your question in the system message, the model might not treat it as a prompt that needs to be answered, but rather as context or instructions, which can lead to unpredictable or nonsensical responses.
### Example of Correct Usage
- System Message: "You are an AI language model that helps users with coding questions."
- User Message: "How do I write a Python function to reverse a list?"
- Assistant Message: The model would then respond with code or an explanation on how to reverse a list in Python.
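The role breakdown above can be sketched as a request against a local OpenAI-compatible server. The base URL assumes LM Studio's default port (1234; Ollama's OpenAI-compatible endpoint uses 11434), and the `chat` helper is an illustration, not part of either tool's SDK:

```python
# Sketch: a correctly structured system/user exchange for a local
# OpenAI-compatible endpoint. URL and model name are assumptions;
# adjust them for your own LM Studio or Ollama setup.
import json
from urllib import request

def chat(messages, model, base_url="http://localhost:1234/v1"):
    """POST a chat-completions request and return the assistant's reply."""
    body = json.dumps({"model": model, "messages": messages, "stream": False})
    req = request.Request(
        f"{base_url}/chat/completions",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

messages = [
    # System message: defines behavior, not the question itself.
    {"role": "system",
     "content": "You are an AI language model that helps users with coding questions."},
    # User message: the actual prompt the model should answer.
    {"role": "user",
     "content": "How do I write a Python function to reverse a list?"},
]

# Example call (requires a running local server):
# reply = chat(messages, "Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf")
# Appending the reply as an assistant message keeps context for the next turn:
# messages.append({"role": "assistant", "content": reply})
```

If the same question were sent as the lone system message instead, the model would treat it as context rather than as a prompt to answer, which matches the empty or gibberish replies seen in the video.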
Al Gusarov
Could it be that it works in your case because you are using a very different model? Did you try Llama 3.1 8B as in my case? Given its popularity, that is probably the model a lot of users have.
Naveen
Al Gusarov: I understand the bug now. Thank you for the detailed logs.
Naveen
I made a quick video reproducing the steps you mentioned. I'm not able to reproduce the issue.
Are you still experiencing this bug?