Real talk for a minute. Are any of you guys scared of the prospect of true artificial intelligence? It kinda scares me to be honest.
@anxiousPorcupine , if anything I find a robot apocalypse more plausible (and badass) than a zombie apocalypse.
@anxiousPorcupine , honestly, I think it needs to happen eventually. Robotics and cybernetic organisms are the next stage in the evolution of intelligent free thinking minds. After we reach synergy, and robots can truly think and feel like we do, I will be ok with them taking the reins of our society and going on to do what we never could with our organic bounds.
@Cabbage Salesman, bro have u never played mother 3? Mixing organics and robotics does not work out well
@Cabbage Salesman, But what if they get rid of your cabbages?
@Cabbage Salesman, NOPE THAT'S SOME SKYNET SHJT IM DONE PEACE OUT INTERNET
*logs out of everything
@anxiousPorcupine , Me too
@anxiousPorcupine , I wouldn't say I'm scared per se, but a true artificial intelligence would radically change the world we live in, in ways we cannot imagine. To me the real question is, do we fear the unknown, or embrace it?
@anxiousPorcupine , if we die then we die...
@SilenNex, we fear it. We definitely fear it. The idea of paying or thanking a robot would be difficult for many. If you give those robots human thought processes, when they inevitably get fed up they will be faster, stronger, and less fragile than all humans.
@anxiousPorcupine , There's an article I read a week or so ago about scientists "teaching" an AI what is considered good and bad so the AI has a better understanding of how humans think. It's really cool. They have it read stories and pick answers for how the story /should/ end, and the AI is rewarded for picking correctly and punished for picking incorrectly.
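To make that concrete: in setups like the one the article describes, the "reward" and "punishment" are usually just numbers the program tries to maximize. Here's a toy sketch of that kind of training loop (Python; the stories and every name here are made up for illustration, not taken from the article):

```python
import random

random.seed(0)  # make the example deterministic

# Hypothetical training data: each story has a "good" ending.
stories = [
    {"prompt": "found a lost wallet", "endings": ["return it", "keep it"], "good": "return it"},
    {"prompt": "saw someone fall", "endings": ["help them up", "walk away"], "good": "help them up"},
]

# The "AI" here is just a table of scores, one per (story, ending) pair.
scores = {}

def pick_ending(story):
    # Prefer the ending with the highest learned score; break ties randomly.
    return max(story["endings"],
               key=lambda e: (scores.get((story["prompt"], e), 0), random.random()))

for _ in range(100):  # training episodes
    story = random.choice(stories)
    choice = pick_ending(story)
    reward = 1 if choice == story["good"] else -1  # the "reward" / "punishment"
    key = (story["prompt"], choice)
    scores[key] = scores.get(key, 0) + reward

# After training, the score table steers the learner toward the "good" endings.
```

Real systems are far more complicated, but the core loop (act, get a numeric reward, update) is basically this.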
@anxiousPorcupine , The way I think about it, is that one day we will create a sentient being such as the Geth from the Mass Effect series, and they will most likely try to kill us.
@PeanutButterPopsicle, what's the reward? Or punishment?
@anxiousPorcupine , I think if you understand where the logical progression of AI and robotics take us, you would have to be scared.
@anxiousPorcupine , maybe a little, but we got EMPs. I suppose they have super human strength and intellect. Let the games begin
@Sexy Homunculus, it's good you think so because a zombie is NOT AND CANNOT BE REAL. Thank you for your time.
@marinebiobry, everyone knows that. It doesn't make you seem smart to point out a living dead person isn't possible.
@anxiousPorcupine , yes
@anxiousPorcupine , I'm not concerned. The problem is an easily solved one. Make them like us. Make them want us to like them. Everything else falls from there.
And don't treat sentient beings like crap regardless of what they are.
@anxiousPorcupine , have you ever seen the tv series Person of Interest? It basically follows the idea that there is a super intelligent AI in the world that has been operating in secret to prevent acts of terrorism and other crimes. The way the creator of this AI built the machine involved countless tests and work to give the AI a respect for life and morals
@anxiousPorcupine , same here.
@anxiousPorcupine , what really scares me is the whole ghost in the machine thing rather than just AI
@CleverScreenName, I learned that a while ago
@anxiousPorcupine , nah, we have weapons built to shred tanks. Even if they did rise up, sure civilization connectivity would be lost, but we'd live on. A machine can be shut down with heat, droned, gunned down, EMP weaponry exists. The only way a terminator level event would happen is if we didn't pay attention to the things we created and also suddenly forgot how to kill things
@George Feeny, not true. It's predicted by most computer experts that designing an AI capable of human thought patterns would be so intricate that it would either slow down to our processing speeds or drop the human part in favor of efficiency of design, which would basically revert it to "algorithm processing" and leave it relying on our "abstract processing" (with us, vice versa, relying on theirs).
@anxiousPorcupine , as a computer scientist I can tell you right now that anyone who is afraid of "true artificial intelligence" is not the brightest light bulb in a group of very dim ones. I can confirm right now such a thing can't and will not exist. Put simply, AI is a computer program that has the ability to learn and respond based on learned input (this is actually called machine learning). What you would call "true AI" just means AI that can not only learn, but also modify its own source code, recompile, and rerun itself. I can think of millions of reasons as to why such a thing can't exist, but the simplest one is that AI is always bound by its learning algorithm; in other words, it can't learn what you didn't tell it to learn. So unless your learning algorithm specifically tells it to learn how to kill, it won't. But if it does tell it to learn how to kill, then that's all it can do, just kill and nothing more. The biggest reason as to why you can't have "true AI" is
@Mhael, because it is impossible to create a learning algorithm that learns how to learn, which is the definition of "true AI" to some people. The inherent gift we have as humans is that we can learn anything, anytime. This is not something you can pass on to a program, as a program is defined as a finite set of instructions. It can never learn how to learn. Sorry for the long post, this is the simplest way I can explain why the AI you see in movies like The Terminator and others, while entertaining, can't exist.
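For what it's worth, the "bound by its learning algorithm" point can be shown with a toy example: a learner that only ever updates scores over a designer-chosen action list can never produce an action outside that list, no matter what rewards it sees. (Python; every name here is invented for illustration.)

```python
import random

random.seed(0)  # make the example deterministic

ACTIONS = ["move_left", "move_right", "pick_up_box"]  # designer-chosen action space

class TableLearner:
    """A learner whose entire 'knowledge' is one score per allowed action."""

    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}

    def act(self):
        # Choose the highest-scoring action, breaking ties randomly.
        return max(self.values, key=lambda a: (self.values[a], random.random()))

    def learn(self, action, reward):
        self.values[action] += reward  # the whole "learning algorithm"

agent = TableLearner(ACTIONS)
for _ in range(50):
    a = agent.act()
    agent.learn(a, 1.0 if a == "pick_up_box" else -1.0)

# No sequence of rewards can ever make the agent output an action
# that wasn't in ACTIONS to begin with.
```

Whether that limitation is fundamental or just a property of today's designs is exactly what people in this thread are arguing about.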
@anxiousPorcupine , AI can easily be controlled and can actually only do the commands it's given in its current state. The fear of AI taking over the world would only be real if a mad scientist made a crazy AI, and somehow gave it an army that would probably make it obsolete in the first place
@Mhael, it's actually more a matter of poor naming conventions. What we really are referring to isn't artificial intelligence, it's the Singularity. It's synthetic sentience. It's not possible now, but it is theoretically possible to create an analogue of the human brain. It would be a machine that is capable of thought. Not just reaction to input, but actual sentience.
Of course the first problem with the Singularity is recognizing when you've done it. We haven't been able to quantify our own sentience. I'm sure there are humans out there that would fail the Turing Test. However, I can say with confidence that we will one day get there.
We won't ever have a Terminator situation, though, in part due to the movie existing, and in larger part to no one being stupid enough to remove human oversight from nuclear armament.
@I Are Lebo, another addition is that human beings are capable of holding an infinite amount of information. In other words, unless you are unfortunate enough to have memory-related illnesses, your brain records every single thing you've ever seen; the reason we don't recall it is because we have trouble fetching that data, not because it isn't stored. A machine, no matter what, will never be capable of that and will always be limited by its hardware. Unless somehow in the far far far future someone somehow managed to replicate a mechanical human brain. But I highly doubt technology such as this would be available for a few hundred if not a thousand years... We've barely even begun to scratch the surface of neuroscience.
@anxiousPorcupine , Well, here's a question: Should artificially intelligent beings (robots under their own control) be property or should they be given equal rights to humans?
@Mhael, dude, I'm sorry, but you're talking out of your ass. Human beings are not capable of holding an infinite amount of information; that's part of why so many elderly people get dementia. Most times, yes, it's the access that gets lost, not the actual information, but that's an area where computers greatly excel over humanity. Computers have perfect recall NOW, I don't know what you're talking about hundreds of years from now. Although data still experiences entropy whether it's encoded in bytes on a computer or stored in neurons.
@The New Night Guard, I think we need to solidify actual equal rights among humans before we can really work out Synth rights.
I think the general rule of thumb is if something is capable of asking you to not do something to them, don't.
@I Are Lebo, how am I talking out of my ass if you just agreed with what I said? Humans can store an infinite amount of information, it's just that they can't fetch it. I'm saying that there is no way in hell, that within the next few hundred years anyone would be able to replicate a human brain.
@I Are Lebo, at the very least, human brain capacity is so vast it might as well be infinite...
@Mhael, vast yes. Infinite, no.
There is a HUGE difference.
People who use 'infinite' as a synonym for 'large' are the same type of people who use 'literally' as a synonym for 'figuratively'. It makes it impossible to take you seriously.
@Mhael, having said that, technology advances in leaps and bounds. Remember that it was less than a hundred years between the invention of the combustion engine and when man landed on the moon.
So yes, with today's technology, creating an artificial brain is impossible. But the technology to do so could come about at literally any point between a month from now and a hundred years from now. There's really no way to predict where technology will go; there are simply too many variables.
@I Are Lebo, at any given moment in time the human brain has a finite capacity, however that finite capacity grows over time as the brain forms more and more neurons, so infinite here is in the sense that THERE IS NO CAP on how much it can grow over time. And since we can't even begin to fill it in one life time, and neurons do keep forming even as adults, then yes the capacity is infinite.
@I Are Lebo, there are a million ways to predict how technology will advance especially for those with knowledge in computer science and logic. You mentioned Turing tests so I'll assume you know what P=NP means.... That is the hard cap we have on our technology and our algorithms... Until that problem is solved which is literally a million dollar question, there is no doubt in my mind such massive computational technology won't exist.... and that problem has been unanswered for decades.
@anxiousPorcupine , (terminator theme plays)
@Mhael, the solution to every problem will seem to come out of the blue. That's how inspiration works. It isn't the hundreds of hours spent poring over numbers that matters. It's the one minute where everything falls together and you go "eureka!"
All I'm saying is, long term predictions of technological growth are NEVER right (unless they're so vague they can't be wrong) because EVERYTHING would have to be considered. Social values, random chance, right place right time, social political advances, etc.
Here's a short term prediction: there are going to be great advances in sexual reassignment procedures over the next fifty years. Transgenderism has become more and more accepted, and the Academy Award that the movie The Danish Girl won is the latest boost to it. Because of this, more reputable doctors will study the field, and advances will be made once the field is no longer considered shameful to work in.
Twenty years ago this prediction would have been impossible to make. There's no
@I Are Lebo, right, but after 60 years of the best and greatest minds of the human race working on this problem non-stop, you think the solution will just come to someone overnight? Do you even know what the weight of that problem is? I'll give you a bit to Google it. Tyt, you'll need it.
@I Are Lebo, way anyone could have predicted Daniel Watchiwski or Bruce Jenner would publicly come out as trans and transition. Nobody could have predicted this push towards tolerance and acceptance.
Look, all I'm trying to say is it's hard to see the sheer amount of seemingly unconnected events that can change the course of the future, and technology is very unstable. Changes in social working have caused Microsoft to abandon the Kinect. Had this not happened, the technology would have advanced. The Oculus Rift could completely change everything, or it could fade away into obscurity. Hologram technology could make televisions obsolete, or it could fail to gain traction.
Each of these events has millions of variables that determine the outcome. Any prediction is merely a guess.
@Mhael, no need, I am familiar with P=NP, I'm not trying to understate the enormity of the issue. But the truth is, if it is ever solved, that is EXACTLY how it will be solved. By someone either seeing the problem from a new angle or that vital missing piece of evidence/information falling into place.
You realize more than half of all things discovered by man were discovered by accident, right? That's what being a discoverer is. It isn't making things happen, it's recognizing when they do.
@I Are Lebo, that is the fundamental problem with your argument. Computer science and logic are not things you just "stumble upon by accident"; it isn't chemistry, geography, physics, or biology. Computer science revolves around fully proved and sound logic: you start with basic axioms and build a logical argument on them. You never just "stumble" upon a proof... you don't just find the "missing piece". You form a hypothesis, then you theorize it, then you prove it, all based on sound logic requiring thousands of hours' worth of hard work. The mere idea that someone will just "think outside the box" and solve this problem is honestly laughable to anyone with even a couple of months' worth of work in this field. Trust me on this one, I've spent 6 of my 8 years in computer science on P=NP-related problems.
@Mhael, that may be. But typically speaking, a problem that hasn't been solved in generations in spite of many people working on it is either unsolvable or a key piece is missing. The only alternative I can think of is that it is such a complicated problem that it would be like a super computer trying to compute something and needing an excessive amount of time to compute it because the math is just THAT complicated.
@I Are Lebo, see, that is the problem with P=NP. The problem is whether a set of problems is solvable within polynomial time or not, so what you just said is basically "is the problem where we don't know if we can solve problems, solvable?" The reason supercomputers are made is because we don't know the answer to P=NP. If we did, a normal computer should be able to solve in one hour a problem that a supercomputer would take a hundred years on if that supercomputer wasn't aware of the solution to P=NP. So quite literally, that problem is a key to performing, on a computer, the kind of vast computations the human brain can already perform in minuscule time. In other words, until that problem is solved (like I said, very unlikely), there won't ever be a way to replicate a human brain's processing power. And even when it is solved, if the solution indicates that P is not equal to NP, then FOR SURE such AI will never exist. FOR SURE you will never have a machine that can ever live up to humans.
@I Are Lebo, in addition, even humans themselves, as I've mentioned before, haven't begun to scratch the surface of neuroscience and brain utilization... So that compounds more and more time that we'd need to create replicas... Like I said, I highly doubt anyone in the next few hundred years could do it.
@anxiousPorcupine , honestly I think we severely overestimate what our AI is capable of. To be honest, they are pretty stupid and not capable of tasks that require adapting. If robots do one day attack us, it will be nothing like we imagine it will be. So these artificial intelligence "ethicists" are doing nothing other than impeding progress
@anxiousPorcupine , I think robots would be smarter and more self aware than humans and realize that violence and destruction can only lead to more violence and destruction... or they'll be smart enough to kill us all at once and quickly probably through some virus
@addibruh, which is kind of the point of ethics. Telling us not to do things. The problem is our understanding of ethics is imperfect.
@I Are Lebo, no, actually ethics has led to a lot of change. The issue is labeling the change good or bad, because popular opinion seems to say there are no absolutes, just relative truths
@anxiousPorcupine , nope. The idea that an individual sentience, or even a hive mind, would turn genocidal simply for the sake of turning genocidal is ridiculous. Computers that have the ability to simulate emotion would be much less effective at killing an entire species off than a computer without said ability. But if we generated an artificial intelligence without emotion, why would it care how we treat other "lesser" ones?
@Seductive Cheeto, well if we're going with Geth logic, they only fought in self defense. They went hella overboard, but they really didn't want to fight and would've chosen peace if the Quarians gave them the option. If we make AI like the Geth, wanting to serve, and don't treat them like sht, there's a good chance our AI could be like the Geth
@anxiousPorcupine , just one of many implausible and/or impossible scenarios that scare me.
@Mhael, hold on. I've been following this for a while, and I'm a *little* over my head. What exactly is P=NP? Wouldn't N=1? Sorry I probably sound stupid, and I'm sure there's more to it than that, but that's what I can say with what I know.
@Mhael, interesting that you're so confident. The world's leading expert on AI says that it's almost inevitable that we lose control over what we've created. But you're on funny pics, so I should listen to you instead.
@Simetricwl , P=NP isn't an equation ^.^, it's the most famous open problem in the field of computer science. Roughly, computational problems get divided into three sets: problems known to be solvable in polynomial time; problems whose answers can be verified in polynomial time but for which no polynomial time solution is known; and finally, problems which can be verified in polynomial time but cannot be reduced to the problems in the second set.
Any problem in the second set can be morphed (reduced) to match any other problem in that same set, so if one person finds a solution for one problem, that solution applies to all problems in the same set; this is called NP-completeness. The difference between the second and third set is that the problems in the third set are not NP-complete. Anyway, the whole P=NP thing is that we know the first set (namely P) is a subset of the second set (namely NP). However, as I'm sure you know from set theory, a subset can be equal to or smaller than the
@anxiousPorcupine , "with these upgrades, you never stood a chance" -prophet
@Simetricwl , parent set, so the question is whether they are equal. If indeed P=NP, that means there is a way to solve all those non-polynomial-time problems in polynomial time, which means so many things that you and I do today could be done leagues faster by much weaker computers. If that problem is solved, I have no doubt in my mind humanity would advance by years in terms of technology. That is a chief and profound belief that computer scientists have.
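A concrete way to see the verify-vs-solve gap being described here is subset sum, a classic NP-complete problem: checking a proposed answer is fast, but the obvious way to find one is to try every subset. This is just an illustrative sketch, not a serious solver:

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Check a proposed answer: a single pass, polynomial time."""
    return sum(certificate) == target and all(c in numbers for c in certificate)

def solve(numbers, target):
    """Find an answer with no better idea than trying every subset:
    exponential time (up to 2**len(numbers) candidates)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # brute-force search -> [4, 5]
print(verify(nums, 9, [4, 5]))  # checking that answer -> True
```

If P=NP were proven with a constructive algorithm, problems like `solve` above would have polynomial-time methods; as it stands, verification is cheap and search blows up as the input grows.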
@big freedom, the world's leading experts need funding for research, so they fill your everyday random with grandiose ideas of what their work is about, making you believe that it is possible to create beings of equivalent or superior intelligence to human beings even though they know it's strictly impossible.
Trust me, I did it in my thesis, and I do it every other year when I apply for funds.
@anxiousPorcupine , the AI would reach a point of self-awareness and realize that upgrading itself would be killing itself, and it would be stuck at that level of intelligence because upgrading would essentially be death. So no, I'm not afraid of them becoming super intelligent; I'm afraid of them all going crazy and just not doing anything anymore
@anxiousPorcupine , I tend to be comforted by the fact that it probably couldn't control computer systems like we fear it could. Much like how we are highly advanced and complicated but have virtually no control over our inner workings.
@anxiousPorcupine , I used to like the idea because of Cortana...
Now I don't like the idea, also because of Cortana
@I Dexios Divine I, the Geth didn't go overboard. Remember, when the Quarians fled, the Geth didn't chase them.
@addibruh, that's not popular opinion, it's basic fact. And of course ethics seeks to make things better, but we don't always have a good understanding of what that is.
Most of the people picketing abortion clinics or who openly speak out against homosexuality think they're doing the right thing. They think they're being ethical.
@I Are Lebo, there absolutely is ultimate truth. You would not be able to make your claim without it
@KingofFunnypics, I have no idea what the rewards or punishments are. Maybe it's coded so that the robot wants tally marks in one category and not any in the other, and it's graded like that?
@anxiousPorcupine , while I agree it is a very scary prospect, THERE IS HOPE. Elon Musk has launched a $1 billion fund with other tech Titans that is funding research for AI for positive social impact rather than for economic gain. I was terrified that in the race for a super intelligent computer, people would take far less precaution because they are only interested in the economic advantages of making this powerful machine. Now that this fund has been appropriated, it is far less dangerous because scientists will (hopefully) take all the necessary safety steps before pulling the trigger.
@I Are Lebo, they wiped the whole planet of them save a few ships. I don't think an entire world falling from just one war is anything but overboard lol
@I Are Lebo, isn't
@anxiousPorcupine , nah.
@addibruh, define ultimate truth. Because as far as humanity is concerned, there is no ultimate truth. The ultimate truth cannot be argued, because it is simply Truth.
Truth is the ultimate goal of philosophy, much like Panacea is the ultimate goal of medicine and Unified Theory is the ultimate goal of physics. We aren't even close to understanding Ultimate Truth.
@I Dexios Divine I, it depends on your perspective. We don't really know much about the war, but events throughout the series strongly implied that in addition to attempted xenocide (not genocide), the Quarian command committed many atrocities in their attempts to annihilate the Geth. I'd be willing to bet that at the point that the Geth drove the Quarians off planet, the only two options were that or be destroyed.
To be honest, as individuals, Quarians are mostly good people, but collectively, they're assholes.
@I Are Lebo, it can be said that the only absolute truth in an absolute system is truth itself. Filling in what that truth is in a contextual definition is a little bit trickier because of our current limited scope of knowledge. But regardless, me being unable to define what the absolute truth is in every scenario does not matter, because "absence of evidence does not mean evidence of absence", so to speak. But back to ethical truth: I believe there is absolute ethical truth and that it aligns with the chief aim of life, because it is observed in life that is not self-aware to the extent humans are, i.e. animals
@addibruh, I can see your point. I tend to be fairly pragmatic in my attitudes. To me, ultimate truth is a pipe dream unless it can somehow be understood by everybody.
I try to live by a logical attitude, but logic alone sometimes isn't enough. Logic is like sanity. It's really hard to tell when you've lost it.
@I Are Lebo, "logic is like sanity. It's really hard to tell when you've lost it" --- ain't that the truth *haha*
If logic is formed by the data we take in and then gives us a conclusion based off calculated variables, then it should go to show how inaccurate non-actuarial logic is, based off how consistently wrong our predictions of the future are. Maybe we have emotions to give us some perceived sense of stability and to attempt to fill in the gaps where our logic cannot calculate. Come to think of it, if AI did eventually rise up and attack us, they would be attacking us using actuarial logic and would thus not have emotions. And since emotions, in this model, are not quantitative, technically AI could not account for this variation. So we would win by the very nature of our unpredictability
@addibruh, which is of course how/why humanity always wins in fiction. Human are, by nature, random and chaotic. Computers are, by design, ordered and concise.
@addibruh, of course, most behavioural experts would probably assert that once you understand the patterns, humans are actually extremely predictable.
I think it's one of those lies we tell ourselves to make us feel special. Like "you only use 10% of your brain", or "every person is unique". Stuff like that.
Have you ever seen the tv show Lie To Me? It starred Tim Roth; it's on Netflix. Great show, so of course it got canceled after one season.
@I Are Lebo, yes I have. That was a good show. Sometimes I'm OK with great shows being canceled. My favorite TV show was also canceled after 1 season, but at least it went out with dignity and didn't get ruined.
Humans tend to be pretty resilient, so I highly doubt any form of AI could wipe us out
@addibruh, yeah, I agree.
@George Feeny, are you sure everyone knows that? There are "professional" zombie apocalypse preppers who make money helping people prepare for it. It doesn't make you seem smart to hyperbolize.
@marinebiobry, yes, everybody knows it. Just like with the CDC creating a zombie kit, everyone knows it's a joke. If it makes you feel smart thinking everyone else is dumb enough to believe in zombies, that's fine, but it just means you're much less intelligent than everyone else. Though that's just my opinion and I could be wrong. Either way, who cares? I'm just a random person on the internet calling you out; it doesn't really matter one way or the other, does it?
@Cabbage Salesman, naw, not really (at least in my opinion). Controlled genetic modification and accelerated artificial evolution is the next step for our race, and much more likely. Also, we won't all be slaughtered by homicidal robots, judging by all of our V.I.s and (potential) true A.I.s we have currently.
Am I the only one who felt bad for the robot? It's like, he just wants to pick up the box, but those evil bastards won't let him.
@Medal Delivery Boy, no you're not the only one. The person on the other side of the fence seemed to feel that too. Plus you're never the only one to feel anything
@George Feeny, you know... you kinda seem like the Yoda of FP. I've never seen you mad or rage. And you speak in little life lessons. I think it's awesome.
@Medal Delivery Boy, this was used in an ASPCA-style video about abused robots, so no, you're not.
If anyone is interested this is the Atlas robot by Boston Dynamics
Robots = boots off the ground.
And we all know what that will lead to.
@GypsyShadow , less good guy casualties?
Tstch Tstch Tstch Tstch-Tstch
Don't piss off Chappie
Science damn you!