The biggest threat to humanity is people's repressed emotions of the child they once were. As long as those repressed emotions remain unresolved, people will be blinded and driven by them into repetition compulsion, hurting and exploiting others the same way they were hurt and exploited as defenseless little children. If people were not emotionally blind, they would be able to see the lies, illusions, and all the traps society constantly puts in front of them. And yes, an emotionally blind humanity, with the aid of technology, will destroy itself much faster.
https://sylvieshene.blogspot.com/2016/07/the-conversation-about-effects-of.html
"The neural network pioneer says the dangers of chatbots were ‘quite scary’ and warns they could be exploited by ‘bad actors’" [I see bad actors everywhere, acting with as-if personalities and pretending to be good people, but they are wolves in sheep's clothing, just like at my job of nine and a half years and at my last job of almost eight years. https://sylvieshene.blogspot.com/2023/03/hard-evidence-of-my-ex-boss-being.html ]
The man often touted as the godfather of AI has quit Google, citing concerns over the flood of misinformation, the possibility for AI to upend the job market, and the “existential risk” posed by the creation of a true digital intelligence.
Dr Geoffrey Hinton, who with two of his students at the University of Toronto built a neural net in 2012, quit Google this week, as first reported by the New York Times.
Hinton, 75, said he quit to speak freely about the dangers of AI, and in part regrets his contribution to the field. He was brought on by Google a decade ago to help develop the company’s AI technology, and the approach he pioneered led the way for current systems such as ChatGPT.
...Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”
But, he added, he was also concerned about the “existential risk of what happens when these things get more intelligent than us”. “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” he said. “So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
He is not alone in the upper echelons of AI research in fearing that the technology could pose serious harm to humanity. Last month, Elon Musk said he had fallen out with the Google co-founder Larry Page because Page was “not taking AI safety seriously enough”. Musk told Fox News that Page wanted “digital superintelligence, basically a digital god, if you will, as soon as possible”. Read more in the link below: