I don’t mind the tool itself if you use it as such. I do mind when people use its output as the final product. See: the lawyer who used ChatGPT for a legal brief.
The lawyer’s fuck-up is what happens when someone doesn’t know or understand the limitations of an LLM.
If you want a GPT model tailored to a specific task, you have to train it with custom data, fine-tune it, and tweak the model’s parameters. You cannot do that from the ChatGPT web/app; you need a custom implementation coded in Python or some other language.
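In practice that custom implementation can be fairly short. A minimal sketch, assuming OpenAI’s v1 Python SDK and an API key in the environment (the file name and epoch count are placeholders, not recommendations):

```python
# Minimal sketch: fine-tuning GPT-3.5 Turbo through OpenAI's API.
# Assumes the openai v1.x package and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Upload the custom training data: chat-formatted JSONL, one example per line,
# e.g. {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),  # placeholder path
    purpose="fine-tune",
)

# Start the fine-tuning job; hyperparameters are where the tweaking happens.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
    hyperparameters={"n_epochs": 3},  # example value only
)
print(job.id, job.status)
```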
There are some UIs that allow for fine-tuning (assuming you have an extremely high-end rig designed for ML). For example, ChatGPT alternative and DALLE alternative.
Thanks. I have quite a powerful rig, but at the moment I work with OpenAI’s API and GPT-3.5 Turbo through a custom (but shitty) Python script with a simple Gradio web interface. However, I mostly stopped improving or updating it months ago. As long as I don’t use LlamaIndex, the cost is quite low.
I already use Stable Diffusion WebUI, tho.
Also, the “fine-tuning” I was talking about is this: https://platform.openai.com/docs/guides/fine-tuning
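For reference, the core of that kind of script can be quite small. A rough sketch, assuming the openai v1 SDK and Gradio’s classic ChatInterface (everything here is illustrative, not the actual script):

```python
# Rough sketch: GPT-3.5 Turbo behind a simple Gradio chat UI.
# Assumes `pip install openai gradio` and OPENAI_API_KEY in the environment.
import gradio as gr
from openai import OpenAI

client = OpenAI()

def respond(message, history):
    # history arrives as (user, assistant) pairs in Gradio's classic format;
    # rebuild it into the message list the chat completions endpoint expects.
    messages = []
    for user_msg, bot_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": message})

    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return reply.choices[0].message.content

gr.ChatInterface(respond).launch()
```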
I am aware of what fine-tuning is. It is available from the Train tab while the base checkpoint is loaded, in both cases.
I’m glad you understand my point. ChatGPT is not Google. It’s a language model that will give you something that looks like the thing you asked it to provide. It can and will pull facts out of its recycle bin if it fits the cadence of what it expects the answer to look like.
ChatGPT is not Google, but sometimes it can work as a glorified search engine or even compete with asking in forums.
I’ve lost count of how many times ChatGPT has produced Bash or Python code for what I needed. Yes, sometimes the code is wrong and/or requires tweaking, and sometimes I’ve had to resort to the documentation, but no one will answer as fast, at any time of day, the way ChatGPT does, at least not for free.
It’s a tool to aid in creating a product, not a tool that magics out a finished product. That’s my point. Too many people use it as the latter instead of the former.
100% agree.
Maybe, with lots of training, tweaking, and testing, the latter could be achieved, but that’s it.
Why do you mind that?
Have you seen that legal brief?
No. Please communicate, and we can have a real conversation.
The person you first replied to asked you to see the legal brief as an example of why they mind using the output as the finished product. You then asked for an explanation. To which I asked you, hey, have you actually looked at that example? You have not.
What exactly do you want here, other than to be argumentative for combative reasons?
Letting a language model do the work of thinking is like building a house and using a circular saw to put nails in. It will do it, but you should not trust the results.
It is not Google. It can, will, and has made up facts, as long as they fit the expected format.
Not at least proofreading and fact-checking the output is beyond lazy and a terrible use of the tool. Using it to create the end product and using it as a tool in the creation of an end product are two very different things.