[Issue 56] [Debate] Man vs. Machine: The Winner's Debate



Man vs. Machine... get involved in the debate!  

45 members have voted

  1. Will machines (specifically AI) ever take over the world?

    • Yes - I support this view
      17
    • No - I oppose this view
      21
    • Not sure
      7
  2. Who do you think won this Winner's Debate?

    • falcosenna1
      17
    • DestroyerPdox
      14
    • They both did well, no clear winner!
      14




 

Last Issue, the results of the Man vs. Machine debating contest were announced. The challenge was for players to write a persuasive piece arguing why they either supported or opposed the AI-themed debate question. After judging many strong entries, we finally decided on the best piece written for each side of the debate.

 

But the contest wasn't over just yet. The two winners were then invited to take part in the ultimate Newspaper-hosted debate, a battle of champions, for the chance to win even more crystals... and here's what happened!

 

 

[Image: the debate question]

 

The Participants

Host and observer: @GoldRock, Newspaper Administrator.
Debater for the motion: @falcosenna1, author of the Best Supporting Piece in the Man vs. Machine contest.
Debater against the motion: @DestroyerPdox, author of the Best Opposing Piece in the Man vs. Machine contest.

Debate Format

 

Opening statements:
Each debater may type one, fairly lengthy message to start the debate. The debaters should use this opportunity to focus on outlining their initial positions and to introduce their strongest arguments, rather than getting into counter-arguments or responding to the opponent just yet. The debater for the motion will go first, followed by the debater against the motion. 
 
Quick fire debate structure:
Once opening statements have been made, both debaters may develop their own arguments or respond to their opponent's. The only condition is that the debaters take it in turns to send one message each, however long or short, to preserve the debate's structure. The debater for the motion will start this section once the debater against has given his opening statement. It ends whenever both debaters agree to finish, though the observer can also end it. 
 
Questions from the observer:
This is a good way for the observer to get involved. Here, the observer will ask each debater three questions, with the debaters taking it in turns to answer. The debaters should take this opportunity to show off their skills at answering queries from a debate audience! 
 
Closing statements:
Finally, each debater will have the opportunity to make a closing statement, summarising the main arguments made during the debate and ultimately why his stance is the correct one. The debater for the motion will make a closing statement first, followed by the debater against the motion wrapping up the debate with his closing statement.

 

Opening Statements
 
The debate has started! Debater for the motion, please type an opening statement. Remember - focus on outlining your position and introducing your best arguments, not on responding to your opponent's side just yet. 
 
falcosenna1: Will AI take over the world? It's a question that racks plenty of brains, as it's a real hot topic. In my opinion, we must surely be aware of the factual dangers of artificial intelligence. There is no problem as long as we have full control over our humanoid robots, but will we be able to keep it? I'm afraid the answer is no. We won't be able to fully monitor their behavior. Behavior is driven by different determinants, one of them being emotions. Emotions can make us unable to respond adaptively in certain situations. It might sound unthinkable, but the very first steps towards creating AI with human emotions have already been made, which can lead to losing control over our 'machines'. Furthermore, the most sophisticated robots are already able to think and learn for themselves, making them able to do several tasks that are not pre-programmed, and therefore able to adapt to any situation. Glitches are a real threat as well. We've had the Tankopocalypse, which is a nice example. A sudden glitch can create a real disaster and disturb the programmed humanoid behavior of AI, and again we lose control.
 
DestroyerPdox: The concept of a computer system attaining sentience and control over worldwide computer systems has been discussed many times in science fiction. One early example from 1964 was the global satellite-driven phone system in Arthur C. Clarke's short story "Dial F for Frankenstein". But all of this requires something to go wrong. A glitch can be fixed in a small amount of time. The AI can't 'lock' us out, right? After all, the manual override system has already been implemented in a few conceptual (upcoming) AIs; the AI of today is unable to perform such a task without letting off some hints.
 
'Quick Fire' Debate 
 
Right, now we'll move onto the 'quick fire' part of the debate. Starting with the debater for the motion, you can develop and counter arguments, perhaps following on from the opening statements. Remember - only one message each at a time, so it will be structured as: falco's message, Pdox's message, falco's message, and so on. This will finish when you both agree you're done with this part of the debate.
 
falcosenna1: The manual override system seems like a great idea, I must say. Though, as has been shown in the past, it doesn't prevent all potentially dangerous situations. It's programmed in such a way that it intervenes only if the situation meets certain 'requirements'.
 
DestroyerPdox: In the words of American writer Elbert Hubbard: "one machine can do the work of fifty ordinary men, but no machine can do the work of one extraordinary man." If programmed correctly (most computer languages point out the mistakes you made before actually accepting your code), it is highly unlikely that AI will go rogue.
 
falcosenna1: Those are beautiful words, but I'm afraid they don't apply to the current evolution of humanoid robots. A group of Japanese researchers made it possible for robots to learn new things which have not been pre-programmed. In the past, a robot programmed for chess couldn't play a game of table tennis and the other way around. In the foreseeable future, a humanoid robot programmed for chess will be able to learn how to play table tennis. Hence future robots will be able to do the job of one extraordinary man.
 
Indeed, mistakes will be pointed out before being accepted, but, well, the robot won't be able to do a task with mistakes in its code.
 
DestroyerPdox: How can a robot that is designed for chess master the moves of table tennis? Sure, there is 'evolution'. It would start off by actually learning table tennis itself. Champions aren't made in a day; it takes years and years for them to train and train and eventually beat former champions. It would take considerable time for the robot to master both chess and table tennis, leaving a huge gap for its programmer/designer to rewrite the code.
 
falcosenna1: Indeed, with current algorithms, the process of learning and accomplishing non-programmed tasks takes quite some time. In the meantime, programmers can indeed rewrite the code, but it's the fact that robots can learn for themselves, and think for themselves about how to fulfill a job they have never done before, that is a potential danger. How can a robot designed for chess master the moves of table tennis? Well, we are not at that stage yet, but the algorithm that lets robots learn has already been developed. 
 
For those interested:
 
DestroyerPdox: You gave us an example of a self-learning robot - SOINN. I have watched the video, and its designer admitted that SOINN is a basic computational 'device'. It isn't an industrial-sized machine. All it can do is pour water into a glass and then put a cube back in its position. It is a very basic self-learning system. If it doesn't know how to do something, or can't read a complex code, it halts and says "I don't know how to do this." This indicates that AI is very far from posing risks to human survival.
 
falcosenna1: Up to now, it's far from posing risks. And indeed, NOW the job that robot has done for the first time ever seems simple, though for a robot it's already quite a feat. The robot will indeed say it doesn't know how to do a certain task if it's too difficult, as it has no sources to learn from. Though, in the future, more such robots will exist, and robots can use the Internet and each other as sources of information. It's indeed very primitive at the moment, but it's a cool example of what robots will be capable of once further developed. It's a nice breakthrough.
 
To return to your opening statement, what do you call a 'small amount of time' to solve a glitch?
 
DestroyerPdox: I indeed agree. It's a nice breakthrough. But if arguably one of the most famous scientists, Stephen Hawking, predicted that AI will take over the world, why do humans build such robots in the first place? Even if they do build such AI, they take precautions. One of these is the override I mentioned earlier. The other major precaution is corrupting the code. If you know how to program, you can easily corrupt the code the AI was provided with in the first place to perform such tasks. In the SOINN video, the complete code can be seen. If I were to put some extra commas or spaces in there, it would render the AI useless.
 
A small amount of time means the time needed for an AI designer to go on his computer and do what I said above ^
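Pdox's claim that a few stray characters are enough to stop a program from ever running is easy to demonstrate. A minimal sketch (Python chosen arbitrarily; the snippet names are invented for illustration):

```python
# An intact statement and the same statement "corrupted" with an extra comma.
valid_code = "x = 1 + 2"
corrupted_code = "x = 1 +, 2"

def parses(source: str) -> bool:
    """Return True if the source compiles, False if it raises a SyntaxError."""
    try:
        compile(source, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False

print(parses(valid_code))      # True
print(parses(corrupted_code))  # False - rejected before it can run at all
```

The corrupted version fails at compile time, before a single instruction executes, which is the "rendered useless" effect described above.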
 
falcosenna1: Why people build such robots is a real concern to a lot of well-known people, such as Hawking, Bill Gates and Elon Musk. I have no idea about the why. Maybe it's just human nature that pushes us as far as possible. You can indeed solve a glitch in a decent time to minimize the consequences with the solution you gave, at least when the researchers are close to their 'labs'. If not, it'd take quite some time. As for the time, I actually meant an amount, like how many hours it would take. 
 
Also, in the future, there'll be more than one such robot. There will be hundreds, thousands of them. They'll be walking among us, indistinguishable. If one starts to behave 'badly' at some point, it can cause great damage before being shut down, right?

DestroyerPdox: Well, since you have no idea why, I can't really provide a good argument. The robots are already operating amongst us. Sports robots, gardening robots, flying robots and many others. All have been working as they are supposed to until now. Nothing bad has happened.
 
I am currently watching this video showing insider access to one of America's 'killer' robots - it operates in war zones where it has the responsibility to drop bombs. Its designer said that this robot is so perfectly created, it will never go rogue; even if it does fail to do what it has been programmed to do, it can be shut down by anyone in a matter of seconds.
 
Link:

 
"A lot of people will see this robot as the AI Terminator, that's gonna turn on humans, bring about Skynet and the end of humanity. But in fact, we are developing robots that we don't want people to be afraid of" - Designer of that robot ^
 
falcosenna1: The why ain't important though. It's just the fact that they're exploring and evolving in creating humanoid robots that's important here. Nothing bad has happened indeed, fortunately. That's because the devices/robots we use now don't contain really hard AI yet. They just follow the rules written in their code. They are not aware of what they're doing. They just do what they're supposed to do. It's different for really hard AI, which is about awareness, cognition and emotions. Up to now, robots with really hard AI have not been introduced to our society. The robots really being used now are just ordinary robots. As for awareness, there's also been a breakthrough.
 
DestroyerPdox: Like you said, robots with hard AI are still under development, and the progress made is almost negligible. It would be years and years before robots like those you describe are introduced. When they are, humanity will also have progressed to a level that can counter the rogue robots. What we don't see is that humanity will also have climbed up a notch. If the robots bring bullets to the fight, we would have thick bulletproof jackets and laser weapons. If they bring laser weapons, we would have mirrors, smoke or, even better, cloaks. Humans will always be capable of outsmarting AI. 
 
"Everything has a creator, and the creator knows best." - Unknown
 
 
falcosenna1: (Could you maybe explain cloak with your own words instead of pasting a link) :P
 
DestroyerPdox: Using electromagnetism to redirect electric-light waves. In other words, bending the ray around you. You see how two magnets repel when the same ends are taken to each other?  If they bring lasers to attack us, we do this. We can counter them.
 
falcosenna1: It's a method to protect us you mean?
 
DestroyerPdox: Well yes, provided that robots do take over the world, which they will not in the first place.
 
falcosenna1: At the moment, the humanoid robots with really hard AI are still in development. That's correct. But I'd not say the current progress and breakthroughs are almost negligible. We build on these breakthroughs for further development, so that development goes faster and faster, just like the Industrial and Technological Revolutions. The first steps, the first discoveries, seemed negligible, but they accelerated the whole progress. It's like the graph below.

[Image: falco's hand-drawn graph of development accelerating over time]


(mind my graph skills)

Still, it'd take quite some years, hence Hawking and Gates are talking about the next 100 years. I agree we'll make our tools better and better. What I don't agree with is that we will always outsmart AI. The video about the robot and the glass of water shows how robots are able to learn and think for themselves, indeed primitive at this point, and they'll be able to adapt to any situation. As shown in the video, the robot scans its environment, very slowly at this point. Though it's the basis on which researchers will build, so that robots scan the situation and environment in less than a second. They'll get smarter and smarter, and eventually outsmart us.
 
"First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I don't understand why some people are not concerned".
~Bill Gates
 
In other words, their level of intelligence will increase drastically and will be a concern.

DestroyerPdox: Well, the graph you showed is, I am afraid, incomplete. What is the x-axis and what is the y-axis? Furthermore, what you said is true. They will get smarter and smarter, but they will take incredible time to achieve a level equal with humans. The thing is, they will never outsmart us. I gave you an example of that. We will block their bullets with armor, their lasers with cloaking. If we shoot their heads off, what will the robot do? Now you see humans fall back to bullets - which I am sure will not be the current technology of that time (30-40 years later). If a robot goes rogue, bust its head off. If it still lurks, bomb it.
 
The words of Bill Gates - upon reading with a clear mind - can be seen to contradict each other. First he says machines will be positive and in our control if we manage them well. If we keep managing them well, there is no way the robots could then go out of control.
 
In other words, their level of intelligence will decrease drastically and the robots will always 'fear' humans, that is, remain under our control.
 
Sure, you must've seen AlphaGo beat the world Go champion. All you need to do is gather more territory than your opponent, but Go is highly complex, vastly more complex than chess, and possesses more possibilities than the total number of atoms in the visible universe. Compared to chess, Go has a larger board with more scope for play and longer games, and, on average, many more alternatives to consider per move. Sure, robots can do that, but that robot can't just stand up, grab a gun and go boom, now can it?

falcosenna1: The graph was only to visualize how the evolution will continue. With every discovery and breakthrough made, the evolution accelerates faster and faster. If you want it completed with an x-axis and y-axis, here you go: take time as the x-axis and evolution and development as the y-axis. As to who will outsmart whom, I'm still on the side of the robots, taking the development of emotions, learning, thinking and consciousness into account. Heading back to the video: the robot scans its situation and environment pretty slowly, but in the future it'll do it in less than a second, making it able to adapt perfectly, protect itself and attack us unpredictably. I'm saying unpredictably as I'm afraid we'll be too late setting up mirrors all over the world, handing out bulletproof jackets to the whole population, or making the cloak thingy possible at any time, anywhere, as robots can behave 'badly' anytime, anywhere. 
 
Bill Gates is talking about two different periods in time. In the current time, nowadays, robots are all positive if we manage them well. And we do manage them well, as you have also said that up to now nothing bad has happened due to robots. We are now able to manage them well. His concern is for our future. He follows the development of humanoid robots quite well and knows the danger of robots with the ability to learn, think and be conscious. He is concerned about how their intelligence develops, and he sees the possibility that they'll outsmart us. Keep in mind that those robots that can learn have the whole Internet as an information source.
 
The robot you're talking about indeed cannot grab a gun and shoot us down. It's an ordinary robot, only able to do the job it's programmed for, without any really hard AI to give it other abilities.

DestroyerPdox: What he doesn't see is how much mankind will have progressed by, let's say, a predictable time for an AI takeover: 2050. By that time, the population will be explosive. There will be no funds to carry out such research, which already requires millions of dollars. Humans will have ruined themselves. That's the end. No food, no land, no water, no money, crisis, riots, protests, terrorism, end of humanity. So not robots. As I said, let a robot go rogue, then punch a hole in its head.

BOOM. It's dead.

falcosenna1: Well, you must be able to hit it before it hits you. :D So I think it'll depend on who will outsmart whom. We shouldn't make robots too intelligent, too human-like, to avoid dangerous situations. As for the funds to finance such research, well, keep in mind the money spent on exploring the universe for planets with possible extraterrestrial life, the billions of dollars spent on weapons for wars, the millions on presidential campaigns... We could build houses for all homeless people and feed everyone to the point of obesity, but the rich and mighty seem not to care about that and prefer spending money on other things, for which they'll always make sure to have money.

DestroyerPdox: Well you are now contradicting your statements. You said robots will take over the world and now you say we shouldn't make robots too intelligent to avoid danger. Okay.

falcosenna1: It's not contradicting. I said we should not make them too intelligent to avoid those situations, but I'm convinced we'll push ourselves further and further to make them more and more intelligent. That's when it gets dangerous, and that's the main thingy Gates and Hawking have concerns about. 
 
DestroyerPdox: Well okay, now you are on my side of the debate now, that is, opposing...
 
falcosenna1: wait wait :P I'm not on your side, you don't get my statement I'm afraid! Where did I say I'm on your side? :lol:
 
DestroyerPdox: Hm, you said "We shouldn't make robots too intelligent, too human-like to avoid dangerous situations."
 
falcosenna1: Wait. I think you don't fully understand the meaning of that sentence. Let me explain.
 
DestroyerPdox: I do. You are basically saying we shouldn't make robots too intelligent so that they exceed our intelligence and eventually beat us.
 
falcosenna1: We should not do that to avoid the dangerous situations, but we'll make them too intelligent, too human-like as that's the goal of researchers. You don't get the 'shouldn't' part :P
 
DestroyerPdox: That brings me to my final point: robots will never exceed our intelligence. Why? Because we have programmed their AI. If we have programmed their AI, it is virtually impossible for them to exceed our intelligence and rewrite their own code, period.

falcosenna1: That's a nice point of view, and I'm sure a lot will agree. Mine is pointed more towards the future, and specifically towards the evolution of really hard AI, which makes robots human-like with their own emotions, thoughts, learning and thinking capabilities, and consciousness. With the first discoveries made, it's just a matter of time for AI robots to reach higher levels of intelligence, especially when their information sources are unlimited. The future will tell, and I hope we can have this debate again in 50 years or so. 
 
DestroyerPdox: Yeah, I really hope we can have this debate again in 50 years time. We can then see what's up.

Questions from the Observer

Now we'll move onto questions from the observer (myself). I'll do this by asking falco 1 question, then Pdox 1 question etc. until both of you have answered my 3 questions for each side. 

 
GoldRock: Firstly... falco, during the course of this debate, you mentioned that the reasons why people continue to develop AI to potentially dangerous levels are irrelevant to the debate about whether an AI takeover will actually happen or not. 
 
However, surely the psychology behind the master can help us to understand the capability/potential of the creation (the AI), so to speak?

falcosenna1: The masters are trying to push their limits further and further, and with recent breakthroughs and algorithms, they're managing to do so. Their goal is to create robots which will be indistinguishable from humans, meaning giving them abilities we have, like consciousness, adapting to different situations and surroundings, learning, thinking and emotions. A robot will have its own feelings and set goals for itself, just like humans do. That's when we lose full control, and that's when the original psychology and goal of the master is actually gone and of no matter. It can help us understand the capabilities at first, but when the robot's emotions and goals (made by the robot itself) take over, the psychology of the master can't help us understand the robot anymore.

GoldRock: I see, interesting answer! Along similar lines... Pdox - you've spent a lot of time talking about how AI could easily have safeguards or be stopped. 
 
But if malicious humans intentionally make use of AI to 'take over the world', or at least to dominate an area or a country - how would AI not take over the world if an 'evil genius' were to create hugely difficult barriers to accessing the code and preventing this takeover?

DestroyerPdox: Well yeah, that's a great question. Mankind has always been doing things wrong in the past, and I am sure there are a lot of examples as well. You see, a robot requires testing and an official license before it can be put into use. Someone can't just stand up, write a destructive code, design a robot and let it go free to kill anyone in its path. That's more like what we saw in Call of Duty: Black Ops 3. But then again, we saw mankind defeat them even though they were dominating a certain area. First of all, it requires money. A lot of research, design and recruitment is needed to build a single robot. If it can't run, it can't shoot either. Then the license, which is provided by the relevant government agencies. If the government sees the robot as unfriendly, there is no way it will provide a license to that company.
 
Then come the Three Laws of Robotics (Oh yes, they are real.):
 
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
 
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
 
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws were introduced in a science fiction story written by Isaac Asimov. They are often cited as a basis on which robots should be made.
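The three laws quoted above are strictly ordered, with each law yielding to the ones before it. That ordering can be sketched as a simple priority check (purely illustrative; the `Action` type and its flags are invented for this sketch, not part of any real robotics system):

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags describing a candidate action
    harms_human: bool = False        # would the action injure a human?
    allows_harm: bool = False        # would inaction let a human come to harm?
    ordered_by_human: bool = False   # was the action ordered by a human?
    protects_self: bool = False     # does the action preserve the robot?

def permitted(action: Action) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law: never injure a human or, through inaction, allow harm.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return action.protects_self

print(permitted(Action(ordered_by_human=True)))                     # True
print(permitted(Action(harms_human=True, ordered_by_human=True)))   # False
```

The second call shows the priority at work: an order from a human is refused the moment it conflicts with the First Law.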

GoldRock: Falco, you mentioned the Technological Revolution, but this is actually partly slowing down because we're reaching the limits of some technologies. For example, computer chips - there's only so small/efficient you can get! However, your graph showed 'accelerating development' for AI in the future. 
 
Therefore, how will this 'accelerating development' for AI be achieved? In what areas - coding, hardware or somewhere else?

falcosenna1: I see the evolution of AI as a revolution in itself, flowing from the Technological Revolution. We're indeed reaching the limits of some technologies, like the computer chips you mention. Though my graph showed an acceleration indeed. Every revolution has shown acceleration in the past, as researchers can build on previous experience and acquired knowledge. The accelerating development of AI will mainly be achieved in the area of algorithms. With the invention of the SOINN algorithm, researchers have achieved the breakthrough of letting robots think, learn and adapt to situations, still primitively. It's this algorithm, just like the current algorithms for emotions and awareness, that researchers must build on. Adjusting the algorithm, or using the experience from this algorithm to create new algorithms, can produce big jumps towards the final goals.
 
GoldRock: Ooh right, I see - thanks for clarifying that! 
 
Pdox, you used the idea that humans will have practically ruined themselves by 2050 as an argument why AI will not develop to the stage of being able to achieve 'takeover'. 
 
Do you have any sources or reasoning that you can give to back up this claim, considering that the development in areas including AI will surely continue if this claim is untrue? 
 
After all, I don't think many experts expect the 'end of humanity' to be quite so soon!
 
DestroyerPdox: First of all, by 2050, it is estimated that the world population will have exploded to 9.6 billion. According to the UN FAO, that will cause major food shortages. The need for more food will require more land for farming. But at the same time, more people will require more houses. There will be a shortage of land throughout the world. Sure, we can create artificial islands, but we can't just make a full-fledged continent. Secondly, global CO2 emissions will be almost 55.87 gigatonnes, which could have disastrous effects. Carbon prices will increase, and the market will turn to research on absorbing this huge amount of carbon dioxide from the atmosphere, which will lead the world of robotics to go slightly downhill. This major catastrophe will have terrible effects on the climate and the economy. Thirdly, the average life expectancy will be an estimated 76 by 2050 - which means more food.
 
The core question will become: Can you afford to still go on researching new technology, or find a solution to the world's problems?
 
Vittorio Colao, the chief executive of Vodafone, said geopolitical tensions and rising extremism did not give snooping governments a "license to do everything".
"There is no doubt, governments have a duty to protect their citizens. But that should not be a license to do everything because a balance has to be reached."
 
Marissa Mayer, the boss of Yahoo!, agreed that requests for data had to be proportionate. "Users own their data, they need to have control," she said.
 
Marc Benioff, the chief executive of cloud computing company Salesforce, added: "Trust is a serious problem for the internet. We all have to step up to another level of openness and transparency if we are to address this problem. People need to know where their data is, who is using it and why, and where there is a problem, we need full disclosure, no secrets. That's not where we are today."
 
You see, all these major people agree that data on the Internet should be protected. They are already designing more complex algorithms that robots will have a slim chance of cracking. Robots can't crack voice recognition, facial recognition and touch systems. 
 
There, we will be protected from all sides, but eventually it will be us who are the main cause of our own deaths.
 
GoldRock: Finally, back to falco - you said that AI is likely to 'unpredictably' attack us as it gets too intelligent, further reducing our ability to prevent their 'takeover'. However, many people seem to have the foresight that this 'takeover' could be a problem, including Bill Gates who you quoted - and Pdox has pointed out that we are already taking steps to prevent this problem. 
 
Considering the development of AI is done by humans themselves, surely our governments would see this possibility of 'takeover' coming and protect against it, whether evil humans intentionally start to develop rogue AI or AI starts to get too intelligent?
 
falcosenna1: The robots are not and won't be designed to attack us, so if they attack, it'd always be unpredictable, except in the case of evil humans of course. Emotions drive behavior, and we can't control their emotions. Certain situations can trigger certain emotions, which can then cause unpredictable behavior. It's indeed humans who develop AI, but once developed, the humanoid robots with the human-developed algorithm for learning and thinking start developing themselves. It's somewhat like humans, which is the goal. We develop ourselves, we learn, think, have emotions, and with some emotions triggered, like anger and jealousy, humans can show unpredictable behavior, like murdering someone. The government cannot protect us from an individual all of a sudden murdering people, let alone from hundreds or thousands of individuals (robots) with, on top of that, an incredible level of intelligence.
 
GoldRock: Now moving onto the final question for Pdox... you said that humans will always be able to outsmart AI. However, in some ways, it is clear that computers already outsmart us. They can do calculations that we would find impossible to do quickly in milliseconds, for example. 
 
What if AI combines its ability to learn for itself with its 'super fast' thought processes - how could we compete if they can someday learn and calculate a way around our bulletproof jackets faster than us, or the other 'protections' we put in place?
 
DestroyerPdox: You are wrong. Computers exceed our limitation barrier; they don't outsmart us. The human brain is designed that way. If computers were to outsmart humans, our jobs would've been replaced by robots, which hasn't happened and never will. The chance of robots taking over dentistry or surgery (as in the main surgeons themselves) is quite slim - to be exact, less than 0.4%. Robots will never replace teachers; they can never understand children. You will never see R2-D2s working in law enforcement and leadership positions. Imagine having SOINN as your President. RIP.
 
You remember what I said? Mankind will have progressed way beyond bulletproof jackets by 2050. There will be metamaterial cloaking (like how the Earth's magnetic field works against solar radiation) that will deflect any shell coming towards us. 'What if' is a phrase that poets should use. Even a dead man could stand up and say, "What if I had one more day?". What if robots don't take over the world? What if robots cease to exist in the coming years? You see, these are some pretty thought-provoking questions. I don't see robots as the dominant species, like, ever.
 
GoldRock: Good points, thanks for that answer! 

Closing Statements
 
To finish the debate, each debater will summarise the main arguments made during the debate and explain why his stance is ultimately the correct one. As always... falco goes first!

falcosenna1: To come to a closing statement, I'd like you to keep in mind that we're talking about the future right now. We should let go of our everyday view of current, ordinary robots, which can do nothing but follow programmed rules for one specific task. We're talking about a whole new generation of robots: robots with far more capabilities. Recent breakthroughs show what we can expect from the future: robots with the ability to learn and think for themselves, to solve problems they've never encountered before, to screen their surroundings and adapt to any situation. Robots will be aware of their own existence. They'll be conscious and have emotions. As said, robots will learn, with the Internet as an information source, meaning the flow of information is unlimited. An important note to make here is that all information is stored, hence these robots will never forget anything. That is why they will become smarter and smarter, and obtain a level of intelligence that surpasses ours. Emotions will cause unpredictable behavior, with the possibility that a robot will kill dozens of people out of anger, jealousy, or because it doesn't want to be a slave of humans, which the robots can understand due to consciousness. By the time we can stop the robot, there will already be many victims. Unfortunately for us, there'll be thousands of them that can show such divergent behavior. It's only a matter of time before the perfect algorithms are found to realize such human-like robots. We shouldn't wonder if, but when, a robot with all the mentioned qualities will be introduced on Earth. You have been warned.
 
DestroyerPdox: “Darwin is 11 years old;
He weighs 60 pounds;
He is 4 feet, 6 inches tall;
He has brown hair;
 
His love is real;
But he is not” 
 
Since we have both reached our final points and our positions remain unchanged, I would like to conclude with a short wall of text.
 
There are some jobs in which AI will never replace humans: teachers, dentists, doctors, surgeons (approximately all medical jobs), lawyers, policemen, firefighters, pilots, the crews of airplanes and ships, lifeguards and rescue personnel, military and security personnel, singers, vocalists, artists, designers and whatnot. Here, by replacing humans I mean completely eradicating the need for humans in these fields. People have been spending billions of dollars to ensure mankind's security from cyber-related issues, and today's security is well and good in case an AI takeover does happen. This was done on the suggestion of scientists like Stephen Hawking, Musk et cetera. What those two didn't happen to realize is mankind's progress: mankind will have reached levels of impregnable security measures. One more thing: mankind is, I'm sorry to say, the cause of the destruction right now. Overpopulation, food shortage, land shortage; electricity, coal, gas and drinking water are slowly but surely being 'removed' from our environment. It will be our fault if we die because of these issues in the near future. Population to reach 9.6 billion by 2050? That's impressive but lethal at the same time. CO2 emissions to reach almost 60 gigatonnes by the same year? Deadly. We will be to blame for our deaths in the future. Not robots.
 
Let's now consider a robot uprising. An uprising takes much time, which will leave the 'superhumans' of that era enough time to deal with the situation. If a robot goes rogue, put a bullet in its head. If it still lurks, bomb it. When we bomb the AI, the so-called 'mind' of the robot, how will it still raise its hand and cause more destruction? The human brain has always been able to outsmart computers. Do you see computers single-handedly piloting airplanes? What if the engine fails? The computer will think of it as a failure in the ailerons or the elevators. Only the human brain can catch the sound waves, interpret them as an explosion, immediately send signals to the respective parts of the brain, and hopefully prevent a crash. Robots can't do that and never will be able to.
 
“Robots don’t hold onto life. They can’t. They have nothing to hold on to – no soul, no instinct. Grass has more will to live than they do” – Karel Čapek.

 

 

The Winner

 

After reading through and assessing the debate along with Night-Sisters (my fellow Newspaper Admin), we finally managed to come to a decision on who won. So, without further ado and with a dramatic imaginary drumroll...

The result is a tie! The Reporter Administrators felt that both debaters performed exceptionally well and that strong arguments were made for both sides, with the result being that neither was able to gain a significant advantage over the other. Thus, both falcosenna1 and DestroyerPdox will receive a well-deserved 10,000 crystals each for their efforts, on top of the 25,000 crystals each that they already won in the main Man vs Machine contest.

 

Congratulations to both debaters on an outstanding performance - well worthy of publishing in this article for you guys to read!

 

And that brings this article to a close. Which side of the debate do you agree with, and who do you think won the debate (you may even think the debater opposing your view did better)? Vote in the poll to show your support! In addition, feel free to share your thoughts on the topic, make your own arguments or respond to what either of our debaters said by posting below. We hope you enjoyed this article!

 

Zf2b4Q1.png

eVgViCZ.png


Disappointing, really. Half of Pdox's arguments can easily be shot down owing to human intelligence limitations and examples from history. The debate went off track far too many times. The question was never whether AI will ever be smarter than humans (I mean... duh, it will be); the question was whether it will take over the world.



It's normal that during a debate, you go off track. During the debate, after having countered quite a few arguments, both Pdox and I came to the point that an important factor in the question is who will outsmart whom.

 

I hope you still had an interesting read. Gold, Pdox and I really enjoyed it. :D

Edited by GetG0ldB0x


Disappointing? Read your paragraph one more time. Robots will only take over the world and rise as the dominant species if they beat us or outsmart us. You see, we have to bring the debate from 50 years back to 50 years ahead. We can't just go: "Oh yeah, they will take over the Earth!" No, you have to provide examples and sources for your point of view.

 

Second thing is, I don't think you really know the meaning of a debate. Each debater has to counter the arguments the other side has given. falco was the other side, and I was forced to reply to his arguments because he gave me some first. However, I am in no way saying that falco gave me dumb arguments and I then had to write dumber messages in return; they were actually quite hard, and I had to think and even go and research before I made my move.

 

So yeah, that's it I guess. Have a nice day :)

Edited by Hexed


 

The exact same goes for me. :)

 

 

A tie? Seriously? After all that it's a tie!? Oh well, I enjoyed the debate, so thanks! :D

I'm glad you enjoyed it. :)


Easy, man... I never blamed either of you for going off track. I know it happens, I just didn't like it in general. I'll tell you one thing which I've said before too: most people make a mistake when extrapolating AI evolution by assuming that all of it (up until the theoretical Singularity) will be managed and controlled by humans. They never consider that, even now, AIs have already started developing AI. The compounding effect will turn the evolution into a geometric progression. So, instead of the technological evolution graph being a hyperbola approaching an asymptote (like @GoldRock suggested), it will instead be a parabola (which falco drew). At the current growth rate, it's nearly impossible to predict the growth rates and the evolution stage 10-15 years from now. (Yeah, 10-15... not 100.)

 

Edit: Since we are on the topic, let's talk about your closing statement. As is obvious from all your arguments, you are still stuck on the old (current) version of robots: they go rogue, you shut them down. I'm sorry, you can't do that with AI. AI isn't something you can touch or feel; it's a highly complicated sequence of protocols and functions (just like our brain, except that an AI brain could be spread across the entire Internet). And the Internet isn't something you can just pull the plug on. A huge mistake you (and some guy the last time I made this argument) make is comparing AI to current 'smart' computer programs like autopilot in planes, chess programs, etc. These are all examples of ANI (Artificial Narrow Intelligence), which are made to do only one job at better speed and efficiency than humans. The AIs we're talking about here will be AGIs (Artificial General Intelligence), which will be similar to a human brain.

Now, you say humans will evolve in their intelligence too, and I agree. But the speed is slower than a snail's. We're restricted by biological factors (for example, the size of the birth canal means a baby's head, and hence its brain, can't grow past a certain point). AI, on the other hand, runs on an electrical interface, which is much faster and more effective. Our DNA takes many generations to encode one piece of information, but an AI can do that in a matter of seconds. And the more intelligent an AI gets, the quicker it will be able to acquire more intelligence. Within a few hours of the formation of an AGI, it will make itself into an ASI (Artificial Super Intelligence).

Let me give you an analogy: a monkey is less intelligent than us, yet it understands the concept of bricks, and it understands what a building is. But it can't grasp the fact that humans can make a building out of bricks; its limited intelligence won't allow it to understand that. Similarly, an ASI's intelligence will be at a much greater remove from ours than ours is from a monkey's. We won't be able to understand what the AI is planning or doing, even if it tried to explain it to us. After all this, if it were to decide to end the human race for any reason, do you really think all our might would be anywhere near enough to stop it? Unfortunately, this isn't Hollywood.

You quote a writer who died years before the computer was even invented. Yeah... I don't think he's the best authority we have on robots (forget AI, just programmed robots).

 

Edit 2: Do read this article. I posted it in my original debate too. Wait but Why: AI Revolution

Edited by beaku

I get the feeling you want to have this debate with Pdox.  :D  :lol:

 

 

 

 

At first, it wasn't planned to publish the debate. Gold and NS decided to publish it though, as they thought it made an interesting, quality article. The length might scare you off, but it reads quite fluently. :)

Edited by falcosenna1

WOW, amazing debates :lol: These are people who can change the views and opinions of the masses with a few good reasons. Congratulations to both debaters on winning the original contest, and a VERY good job on this debate as well!


Wonder when the Tanki newspaper will become the Tanki newspaper again...

 

If you look around, I'm sure you'll find that the vast majority of articles here are Tanki-related. We just think other content sometimes doesn't hurt, as it can provide variety for our readers. Hope you enjoy our articles ^_^

