I recently saw one of Google’s search answers on Mastodon (see picture below). In that “response,” Google said there were several benefits to dumping used car batteries in the ocean, including recharging electric eels and powering the Gulf Stream. The Bing search engine (not the ChatGPT-powered Bing) picked up this “answer” as well, and it also mentioned the environmental benefits of illegally disposing of batteries. Given these results, I have to wonder whether Google also thinks people can fertilize their lawns with used engine oil.

Whoever searched for this was probably looking for a joke, but it raised an interesting question for me: what happens when more people use ChatGPT, publish the results to the web, and AI systems then collect and learn from the answers they themselves generated?
So I asked ChatGPT, “What will happen when lots of people post ChatGPT’s answers and ChatGPT learns from its own artifacts?”
The bot said:
“If many people post answers from ChatGPT and ChatGPT trains on its own artifacts, a scenario called self-learning or self-supervised learning, the model can learn from its own output, potentially improving performance.
“However, there are potential risks to be aware of with self-learning. One danger is that the model may overfit to its own output, leaving it unable to handle new or unseen data because it has become too specialized in the questions it has encountered before.
“Another risk is that if the model is not properly monitored and supervised during self-learning, it may generate meaningless or inappropriate responses. A model may also capture and amplify patterns or biases in its own output. Therefore, it is important to be careful when self-training a language model like ChatGPT, and to monitor its performance and output carefully so that it continues to generate high-quality responses.”
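The feedback loop ChatGPT describes can be illustrated with a toy simulation. This is a minimal sketch, not how large language models are actually trained: a one-parameter Gaussian fit stands in for “the model,” and drawing fresh samples from the fit stands in for “publishing answers to the web.” Each generation trains only on the previous generation’s output, and the spread of the data steadily degenerates.

```python
import random
import statistics

def fit_and_resample(data, n):
    """'Train' a toy model (fit a normal distribution) on data,
    then 'publish' n fresh samples drawn from that fitted model."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # generation 0: "real" data
spread = [statistics.stdev(data)]

for generation in range(200):  # each generation learns only from the last one's output
    data = fit_and_resample(data, 10)
    spread.append(statistics.stdev(data))

print(f"spread of generation 0:   {spread[0]:.4f}")
print(f"spread of generation 200: {spread[-1]:.4f}")
```

Because every fit is made from a small, noisy sample of the previous fit’s output, estimation errors compound generation after generation, and the distribution’s variance tends to collapse rather than recover. That is the toy version of a model overfitting to, and amplifying quirks of, its own artifacts.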
This answer suggests that if many people trust ChatGPT and take its output at face value, the consequences could be disastrous. So I asked, “What will be the impact on society if a lot of people turn to ChatGPT and it starts learning on its own and giving nonsense answers?”
The bot said:
“If many people rely too heavily on language models like ChatGPT, and ChatGPT starts generating nonsensical responses through self-learning, there could be negative social repercussions. First, users may become frustrated or confused if ChatGPT provides nonsensical or inappropriate responses. This can lead to communication breakdowns and distrust of technology.
“Second, if users rely on ChatGPT to make important decisions and the model gives nonsensical answers, the consequences can be disastrous. Suppose a ChatGPT-based chatbot is used to give medical judgments or diagnose health issues. In that case, if users accept absurd results, it can lead to serious health risks.
“Third, if ChatGPT is used to create content for social media or other online platforms, it can spread misinformation or fake news. Misinformation can have serious social and political consequences because it can be used to manipulate public opinion or undermine the democratic process.
“Language models like ChatGPT should therefore be used with caution, and their performance should be carefully monitored to ensure they continue to generate accurate and appropriate responses. It is also important to recognize the limitations of these models and not view them as substitutes for human intelligence and expertise.”
Of course, technology must be used responsibly, but I wondered what ChatGPT made of our track record. So I asked, “Given that humans have rarely used new technologies responsibly, what would be the outcome of this scenario?”
The answer is:
“While it’s true that humans have struggled to use new technologies responsibly in the past, it’s important to recognize that each new technology comes with its own set of risks and challenges. Therefore, it is difficult to predict with certainty what the outcome of scenarios involving language models such as ChatGPT will be.
“However, certain steps can be taken to minimize the potentially negative impacts of this technology. For example, developers and users of language models can identify and address bias or ethical issues that arise as the technology is developed and deployed. This includes implementing safeguards to prevent the spread of misinformation or inappropriate content, and ensuring that models learn from diverse and representative data.
“In addition, governments and regulators must ensure that language models such as ChatGPT are used responsibly and that safeguards are in place to protect the public. This could include regulations on the use of language models in specific industries or applications, and requirements for transparency and accountability in how the technology is developed and deployed. In summary, while there are certainly risks associated with using language models such as ChatGPT, those risks can be mitigated through responsible development and deployment, regulatory oversight, and public awareness.”
Should ChatGPT be regulated? I emailed OpenAI to ask, but I haven’t received a response yet.
Many people have long trained themselves to “google for answers.” Now I’m starting to train myself to “ask ChatGPT.” But you should always keep in mind that ChatGPT only absorbs part of the internet and plays it back to you. ChatGPT can be a decent tool, but when garbage comes in, garbage comes out.
Like any technology, ChatGPT is BYOB: bring your own brain. Don’t ditch it, and stop feeding eels with car batteries.
editor@itworld.co.kr


