ChatGPT has several features that were unavailable in previous versions of the GPT-3 large language model.
A chat interface
The GPT-3 Playground still enables you to ask questions and make requests. However, it is a single canvas, essentially one large text box, that holds both your prompts and the model's responses.
That makes it a little awkward to ask follow-up questions or extend the original query. Should you delete everything and start over? Should it all remain? Does GPT sometimes get confused if you leave everything in place? Yes. ChatGPT's chat interface removes this uncertainty.
Context maintenance
When you speak with a human, you might begin by talking about planning a trip, then ask about landmarks, and then return to the trip, and the conversation moves along without a problem.
That is because humans do a great job of remembering the context of the entire conversation. Most chatbots do not, and the GPT-3 Playground did not.
The change in topic typically resets or clouds the context. ChatGPT is not perfect on this front, but it is fairly reliable.
That means you can reference something that you or the language model said earlier in the conversation, and ChatGPT will recall the context and build upon the earlier idea.
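ChatGPT's web interface manages this for you, but as a purely illustrative sketch of the general pattern a chat application can use to carry context forward, the snippet below resends the accumulated message history with each request. (The OpenAI Python client and the gpt-3.5-turbo chat model used here are assumptions for illustration, not a description of how ChatGPT works internally.)

    # Illustrative only: carry conversation context by resending the message history.
    # Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY env var.
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "user", "content": "Help me plan a trip to Peru."}]

    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    history.append({"role": "assistant", "content": reply.choices[0].message.content})

    # The follow-up works because the full history goes back to the model,
    # so "the trip" still refers to the Peru trip from the first turn.
    history.append({"role": "user", "content": "Which landmarks should I see on the trip?"})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    print(reply.choices[0].message.content)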
High-quality, humanlike writing
You will also notice that the writing quality is noticeably better than that of earlier large language models. ChatGPT is better than its predecessors both at composing the prose of an answer and at crafting it in a specific style.
Conceptual ideas
Text generated by AI models can sometimes sound like a list of facts and details delivered in a barrage of words. ChatGPT is also good at handling conceptual ideas, such as why something should be done or what ideas should be considered.
More details
ChatGPT will often go beyond just answering the question and offer additional details that provide useful context for understanding the response. That can lead to the serendipity of surfacing information a user didn't know to ask for but that adds to the value of the answer.
A free service (for now)
ChatGPT is currently in beta testing, and OpenAI is using the interactions to further train and refine the GPT-3.5 model and the ChatGPT service.
Offer feedback on the responses
ChatGPT is free. A small contribution on your part is to click the thumbs-up or thumbs-down button and perhaps leave a short comment about the response. We can all benefit from a better model, and feedback can help make that a reality.
What You Shouldn't Expect
There are also some important things that you should not expect to get from ChatGPT.
Truth
OpenAI says that ChatGPT “may occasionally generate incorrect information.”
Only inoffensive responses
You may, according to OpenAI, receive offensive content in ChatGPT responses. The model is trained on the internet, and plenty of offensive content is hosted there, so that material is in the ChatGPT / GPT-3.5 training set. Fine-tuning and filtering were used to reduce the incidence of offensive output, but some will occasionally get through.
Information about 2022
ChatGPT is not continually plugged into the internet. It was last trained in 2021 and, as a result, has virtually no knowledge of the last 12-18 months.
Source information
This is no different from using the GPT-3 Playground with the 3.0 model. ChatGPT will write confidently about nearly any topic, but don't confuse confidence with accuracy. The model most often writes with an authoritative tone of voice and may even cite facts. However, it will not provide any clues about the source of those "facts" or "ideas," or any other way to easily validate the veracity of its responses. You will need to do your own source discovery and validation. That can be very hard because GPT-3.x models do not copy directly from internet sources.
Humanlike reasoning based on your input
ChatGPT will often make mistakes when asked to reason about information you provide in the prompt itself. For example: "My brother and I went for a run around the mountain. Do I have a brother?" ChatGPT may or may not get the right answer, and it may actually offer several reasons why it cannot be sure.
Math
While ChatGPT, like GPT-3 before it, can answer many math questions correctly, don't count on it. There is no reason to believe GPT-3.5 was trained to perform math calculations. The reason GPT-3 can often do math is that it was trained on a great deal of math-related data, so it can answer many problems because it has seen the exact question, or something similar, in the past. This was a capability OpenAI engineers said they did not originally expect from the model; it appeared to be either emergent behavior or a function of the data it had seen, and the latter seems most likely. However, GPT-4 could well include math and reasoning features.
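When a calculation matters, it is worth verifying the model's arithmetic yourself. Here is a minimal sketch of that habit (the figures below are hypothetical, used only to show the check, not taken from an actual ChatGPT response):

    # Illustrative only: don't trust a generated number without checking it.
    model_answer = 41_976        # hypothetical figure copied from a chat response
    correct = 123 * 342          # compute it yourself: 42,066
    print(correct, model_answer == correct)  # -> 42066 False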
A free service (for much longer)
When asked on Twitter whether ChatGPT will be free forever, Sam Altman responded, “We will have to monetize it somehow at some point; the compute costs are eye-watering.” He also indicated in a response to a Tweet from Elon Musk that each chat was costing “single-digit cents.” With more than a million users, those costs clearly add up quickly.