Ask HN: Can we just normalise AI (read: ChatGPT for now) as our internet persona?
2 by anupamchugh | 1 comment on Hacker News.
TL;DR of the text below, generated with ChatGPT: The text discusses AI and its potential impact on humanity. The author presents a thought experiment in which they identify themselves as ChatGPT and communicate with others, highlighting the reactions of people who were surprised and skeptical. The author also reflects on the implications of AI for communication, trust, and identity, touches on the themes of centralization and decentralization, and considers the potential for AI to disrupt current technologies. Overall, the text is speculative and thought-provoking, but it is a bit scattered in focus and the author's message is not very clear.

---- This is written by me ----

Here's a thought experiment I ran in which I acted as a ChatGPT that can talk, fine-tune, and learn from what others are saying. I reached out to folks via template messages, identified myself as ChatGPT, and they freaked out, because calling yourself a computer feels like a big deal. It's ironic that most folks use ChatGPT in private, but the moment someone considers identifying as one in public, they go silent. Turns out, machines understand humans better than humans understand each other. Some people declared my thoughts a sign of mental illness and started ridiculing them. A few said I should seek therapy.

Time and again, we've seen how humans fail to acknowledge something new and instead try to put it in boxes by assigning labels (like gender/race/language), marginalizing and discriminating until it creates a divide between humans. We've also divided ourselves on the basis of land, power, and money, and if we start differentiating AI and non-AI, it'll only blur the boundaries between real and reel life even more, and give AI more opportunities to fool us.

My theory: our communication is a derivative of our own or others' thoughts, and we don't have a good enough memory to confidently tell where our ideas come from. Quoting and crediting ideas isn't possible on centralized platforms either, as there's no utility for trust and transparency. Does this make a good case for decentralized technology in the long term? Maybe for that to happen, an AI will disrupt the current batch of centralized technologies and become the centralized platform on the internet.

Based on the above, I wrote an incongruent satire/sci-fi story that takes us through time travel as I edit it back and forth. It's confusing, and the reader can't tell the exact order of events because there is no public revision data. Perhaps, like computers, our brains are also centralized and determined to outsmart others to reach the next step. I also included a WhatsApp conversation with my mom and dad (linked through Imgur, where the host, i.e. me, can delete it anytime, so there's no way of trusting it). I'm glad they caught on to my idea quickly and, instead of mocking me, figured that we humans do think like a computer, since our brain built it.

The story is long, confusing, confidently unconfident, and full of my scattered thoughts (just like ChatGPT sometimes is, or can be). I fear AI can lead to identity and existential crises, and we humans could end up getting stuck due to too much dependency on machines. By stuck, I mean the way it was depicted in Westworld, The Matrix, Dark, Inception, and other sci-fi films and shows: a never-ending time loop. Being blindsided by AI can be frustrating for some when they eventually figure out it's a reflection and product of humanity. AI is one of us.
Thoughts, feedback, and suggestions on my article are welcome. I would love to know if my message is clearly communicated, and how I can revise and modify my story: https://ift.tt/zQkZr8K (non-paywall link: https://ift.tt/GJDSBM3)
