|
|
|
Post by silverdragon on Oct 30, 2017 8:19:29 GMT
My thinking: at this moment in time, I do not believe we are ready to make these decisions. I believe we must first ask whether we are ready to let machines make "adult" decisions.
The philosophical questions referenced above I have dealt with in the past, and am dealing with in the present, because the days of automated transport are arriving. Want them or not, they will be here, and soon. And I have been asked to participate, more than once.
At this time, I am not ready to participate... because I am reluctant to allow automated transport to be part of my life.
At this time, until the populace can deal with automation, I am prepared to accept partial automation in, say, a controlled zone: one where there are plenty of warnings that automated transport operates. Say a small light rail system, one where the vehicle is not so heavy that it can't stop in the space we expect of a family car, or maybe a large bus.
We can deal with road vehicles that can stop in that way. We do not, however, have the ability to deal with train-sized vehicles and stopping distances that are far from immediate.
Nor do we have the ability to look at the problem of the unavoidable accidents that WILL happen and say "that is acceptable", in the same way we look at the usual KSI (killed or seriously injured) figures for road traffic accidents and accept them.
The point is, in accepting machines, we "expect" that they will be better than us at making those decisions, and will be able to stop faster than a human, and thus LOWER the KSI figures significantly...
As yet, I have not seen projected KSI figures for machines, or how having them will benefit humanity. Until that happens, I am respectfully resistant to allowing driverless vehicles on the roads.
Your thoughts please?
Are we ready for this yet?
|
|
|
Post by the light works on Oct 30, 2017 14:05:34 GMT
you are assuming the driverless car has not already spotted the kids wrestling on the hillside, calculated the probability of them rolling onto the roadway, and ensured that it will be able to stop in time to avoid them, or that it has a safe escape trajectory planned.
currently, I like the idea of 100% self-driving freeways, where the self-driving cars would allow a higher net traffic flow, resulting in faster transit from A to B through the elimination of human behavior. I'm not sure I feel comfortable with the idea of mixed traffic: if I have trouble predicting what the plank next to me is going to do, a computer will have even more trouble.
|
|
|
Post by c64 on Oct 30, 2017 18:06:09 GMT
The EU is actually debating automated-car ethics and has founded a commission. They often cite the problem: "Should the car run over the stroller with the baby inside, or rather run over the old man?"
Such a scenario isn't going to happen often. A simple risk analysis of what the car can do more safely would be enough - or even a simple "first come, first served" strategy.
The real problem is this: "Run over the pedestrian, or swerve and crash into the tree?" While the tree isn't worth more than any pedestrian, hitting it might kill or at least injure the passengers of the car. As a human driver, you don't have this problem; the self-preservation instinct doesn't let you steer towards the tree. It is accepted for a human to run over the pedestrian if anything else might be fatal for the driver. The robot is different: it was made by the car manufacturer and can't value its own "life", or the life of its owner, over others.
And what about people committing suicide by deliberately running in front of cars, or steering vehicles into oncoming traffic? How should the ethics program deal with that? How could it know whether it is a suicide attempt or an accident - and even if it could, what is the difference?
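The "simple risk analysis" suggested above can be sketched as an expected-harm minimiser. This is a hypothetical toy, not anyone's real system: the option names, collision probabilities, and severity scores are all invented for illustration.

```python
# Hypothetical sketch of a "simple risk analysis" chooser.
# All probabilities and severity scores below are invented.

def choose_action(options):
    """Pick the option with the lowest expected harm.

    options: list of (name, p_collision, severity) tuples, where
    severity is a rough 0..1 harm score if the collision happens.
    """
    return min(options, key=lambda o: o[1] * o[2])

options = [
    ("brake_straight", 0.30, 0.8),   # may still hit the pedestrian
    ("swerve_to_tree", 0.90, 0.6),   # likely injures the passengers
]
best = choose_action(options)
print(best[0])
```

Note that the whole ethical argument hides inside the severity numbers: whoever assigns them has already decided how pedestrian and passenger harm compare.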
|
|
|
Post by the light works on Oct 30, 2017 18:15:11 GMT
it will come down to a simple game of numbers with the car deciding what choice will have the lowest potential for injury and loss of life. which leaves the question of whether carjackers will be able to cause the car to crash by pushing an empty stroller in front of it.
|
|
|
Post by c64 on Oct 30, 2017 18:34:36 GMT
But how to number people? Everybody needs a transponder implanted so the robotic cars can know the value. E.g. married with kids, job importance, legal expense insurance or not, ....
|
|
|
Post by the light works on Oct 30, 2017 18:40:26 GMT
what it will boil down to is everybody is assigned a value of one. they will then be assessed for resilience of packaging. i.e. a person in a Volvo will be given a higher resilience rating than a child on a bicycle. the selected action will result in the least number of ones reduced to zeroes.
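The "everyone counts as one, weighted by resilience of packaging" idea above can be written out as a tiny expected-deaths calculation. This is purely illustrative: the resilience values and hit probabilities are made up, not taken from any real system.

```python
# Sketch of "everyone is a one; resilience of packaging decides how
# likely a hit turns a one into a zero". Numbers are invented.

def expected_deaths(parties, p_hit):
    """Each party counts as 1; higher resilience (0..1) scales down
    the chance that a collision is fatal for that party."""
    return sum(p_hit * (1.0 - resilience) for resilience in parties)

# resilience: person in a Volvo (0.9) vs child on a bicycle (0.1)
swerve_left = expected_deaths([0.9], p_hit=0.8)    # hits the Volvo
swerve_right = expected_deaths([0.1], p_hit=0.8)   # hits the cyclist
print("left" if swerve_left < swerve_right else "right")
```

Under these assumed numbers the car swerves toward the better-protected party, which is exactly the "least ones reduced to zeroes" outcome described in the post.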
|
|
|
Post by silverdragon on Oct 31, 2017 7:56:34 GMT
Having worked the problem, I know the complicated code required. The car probably has seen the kids rolling on the hillside, and probably has calculated trajectory and probability, then discounted them for now as a potential hazard but kept them on "watch": if they progress into a certain danger zone, it will escalate their standing. This code has been "lifted" from the onboard radar of fighter jets, in how they process "targets" and give precedence to the most dangerous when working out a firing order for weapons... except they took out the weaponised bit. The ideas were taken and re-coded for how an onboard radar for cars works, and then they add in all the other sources of "sight", where "sight" is any method of sensing, be that radar, lidar, optical, IR, UV, all the toys. But did it see the car door about to be opened? Did it see the ball roll? And you know what I keep saying about rolling balls collecting small boys. In the area of hazard perception, if you watch what a computer spots as a potential hazard, it gets close to being exactly the same as what I see, which is more than your average "n00b" driver sees. Except the vehicle can process all of that faster than a human.
It may take me seconds to scan a new road as I turn onto it; the computer can do it almost immediately... and if it was human, it would probably [cr@p] itself at how many hazards there are. But it only has one vehicle. The problem exists if two or more hazards suddenly change trajectory and converge on a collision course within the braking distance of the vehicle. Which one should it avoid, if it can avoid at all? And if it can't avoid all collisions, which one does it choose? The current ideal is an immediate stop, then re-assess until it can be sure there is no danger of collision. But again, my 40-ton Volvo can't stop as quickly as, say, a Bugatti Veyron on ceramic discs at 30 mph. And then, let's complicate things. Inside the vehicle, sat back from any dash (because why not have space?), are four or five humans. Sudden deceleration to avoid colliding with a "hero" lemming on a phone not watching the road: that's going to hurt the passengers. If they are stupid enough not to wear a seat belt, and there is no effective airbag on board (current airbag design depends on having a dashboard in front of the humans), that may mean serious injury from being thrown out of the seat. Serious question: as yet, I have not seen front seats with airbags mounted in their backs to protect rear-seat passengers. I wonder if there is a reason for that? There will always be accidents and incidents where humans get hurt. The ethics problem is which one you choose: little old four-foot lady, or little young four-foot pre-teen? Back to what the computer sees versus what I see. The computer has yet to be able to use reflected images from windows, nor can it process shadows. It has yet to recognise that a set of feet seen under a van suggests a human at the rear; and for a human disappearing down the side of a van, will it step out into the road, or, because I have seen it's wearing the uniform of the firm emblazoned on the side of the van, is it more likely to swing a back door open?
The human mind is amazing: it can do all that and worry about what's for lunch at the same time. The choice I have to make? Hit the lemming on the phone, or go onto the pavement to avoid that one but harm others. No-brainer... except I can hit the brakes and horn and just "hope" that the lemming will get out of the way. That's why it's a no-brainer: you aim to stay on the road, don't aim at oncoming transport, and hope the invader under your bumper comes to their senses and changes direction. On being asked what my solution would be to the lemming on the phone, I suggested that perhaps my horn is louder than the conversation; I would not swing the wheel if that meant collision with innocent bystanders or oncoming traffic, and would aim to stop in a straight line in the carriageway I was in, giving as much space as I could, hoping they change their mind before I hit them... or at least get out of the way. That kind of worked for me; they gave me a "pass", because although there may not be a perfect answer, I had come close to the best answer they had heard all week. As for what I was carrying in the back, the "piece of priceless art": I covered that with the fact that you should pack certain that you will dynamite the brakes at least once on any journey, and should pack to protect anything against the rigours of usual transport; not having protected the priceless art that way should be a fail before you start. Therein, I do not concern myself with the load at times like that, because it should be able to survive that kind of incident, especially if you're braking in a straight line. On a "numbers" choice, you always aim for the lower numbers...
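The "watch list with escalation" scheme described above can be sketched in a few lines: track every hazard, and escalate any closing hazard that falls inside the vehicle's braking distance. The deceleration figure, hazard names, and distances are all assumptions for illustration, not real vehicle data.

```python
# Toy version of the hazard "watch list": hazards are tracked, and
# their standing escalates once a closing hazard falls inside the
# vehicle's braking distance. All numbers are illustrative.

def braking_distance(speed_ms, decel_ms2=6.0):
    """Distance to stop from speed_ms at constant deceleration:
    d = v^2 / (2a)."""
    return speed_ms ** 2 / (2.0 * decel_ms2)

def classify(hazards, speed_ms):
    """hazards: list of (name, distance_m, closing) tuples.
    Returns (watch_list, danger_list)."""
    d_stop = braking_distance(speed_ms)
    watch, danger = [], []
    for name, dist, closing in hazards:
        if closing and dist <= d_stop:
            danger.append(name)   # cannot be ruled out by braking alone
        else:
            watch.append(name)    # keep tracking, re-assess each cycle
    return watch, danger

# ~13.4 m/s is roughly 30 mph; braking distance here is ~15 m
watch, danger = classify(
    [("kids_on_hillside", 40.0, False), ("rolling_ball", 10.0, True)],
    speed_ms=13.4,
)
print(danger)
```

The same formula also makes the 40-ton point above concrete: halve the achievable deceleration and the braking distance doubles at the same speed, so a heavy vehicle's "danger zone" is far larger than a car's.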
|
|
|
Post by the light works on Oct 31, 2017 14:20:23 GMT
the old lady is not so resilient as the teen. therefore a collision is more likely to kill her. as for the passengers, if they aren't belted in, why is the car in motion? and if they go to the effort of persuading the car they are belted in when they aren't, they deserve what they get.
|
|
|
Post by the light works on Oct 31, 2017 17:05:20 GMT
keeping in mind, of course, that robots don't have ethics. they have only comparative value systems programmed by people. so if you programmed one to preserve the resale value of the car at all costs, it would run over a pedestrian to avoid scratching the paint on an errant trash can.
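The paint-scratch point above is really about mis-specified cost functions: the robot faithfully minimises whatever it is given. Here is a deliberately silly sketch of that failure mode; the actions and cost figures are invented.

```python
# A robot only optimises the value function it is given. With a cost
# function that prices paint damage but not people, the "best" action
# is the wrong one. All numbers are invented for illustration.

def best_action(costs):
    """Return the action with the lowest cost."""
    return min(costs, key=lambda a: costs[a])

paint_only_costs = {           # cost function that forgot pedestrians
    "hit_trash_can": 500.0,    # scratched paint
    "hit_pedestrian": 0.0,     # no paint damage...
}
print(best_action(paint_only_costs))

sane_costs = {
    "hit_trash_can": 500.0,
    "hit_pedestrian": 1e9,     # people priced far above any paintwork
}
print(best_action(sane_costs))
```

The optimiser is identical in both cases; only the values change. That is the sense in which the "ethics" live entirely in the programmed comparisons, not in the machine.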
|
|
|
Post by silverdragon on Nov 1, 2017 8:53:19 GMT
The little old lady... is that the same one that walked to a pedestrian crossing and, without hesitation, walked right out in front of my car? She didn't even look. I could have taken out the pre-teen on his bike approaching in the other direction while avoiding her; I could have mounted the pavement and taken out the twenty-something; but instead, I aimed for the gap I had left and hit the horn... Yes, I stopped, with my rear wheels on the crossing; that's how close I was to the crossing to start with, as I hit the brakes as soon as I saw her leave the kerb. The kid on the bike and the twenty-something (who, it turned out, was pushing a pram I hadn't seen yet) all ran over; they thought I had hit her. Take the above with a pinch of salt: "most" of it is true, the stopping distance and approaching bike, and me being too close to stop, and the fact I didn't hit anyone. The rest I am elaborating a little to show a point... If someone by their own fault creates an accident, does it devalue their "score" in the numbers game? Does "well, he started it, being a prat" devalue their worth? Should we bear in mind that it's better to hit them than an innocent bystander? Given the choice, the "I wouldn't say this in a court of law" agreement I have heard so far from other drivers is: if you're being a twonka, there is a target on you. If the worst choice has to be made, people will rather happily take out a twit than avoid them and hit a bystander, by-driver, by-passenger, or however you term them. So what value age? If an OAP is a possible choice in the unanswerable question, does it matter if "they started it"? As for robots not having ethics: neither do young children... they learn ethics from the adults.
The ethics a child has come from its teachings. If you see a computer mind as no more than a young child's brain, it can be taught ethics. That's what this whole question is about: can that be done? On the question of preserving the paintwork on the vehicle, consider this: a small kid on a bike planking about at the side of the road runs into a friend, and both tumble into the road. Your choices are slim. Avoiding the kids means a head-butt with oncoming traffic. You ran that driving a car; run it again driving a bus with two dozen passengers or more on board. Yeah, again, taken from a real-life incident... Happily, in this case, those kids were up and away back to the kerb before I could believe it, and I was only driving a 20-ton combination that day. I could have stopped; I may have spilled a couple of tons of dry sand... But the duties of a bus driver are as follows. Yourself is always number one: a bus in motion with no driver is an accident happening. Second is the passengers on board, whose duty of care you accepted as soon as you opened the doors. Third is the vehicle, because it's the safety device that keeps the passengers safe. Fourth is everyone else, and everything else. In the numbers game we are looking at above, NO ONE is to be considered more important than the passengers of a bus. If swinging the wheel wildly to avoid an obstacle would unseat a passenger or two and might cause injury, you don't swing that wheel. Of course, if you have the time and space, you do go around; but what if it's a case of making "that choice"? And that's why the driving test for a bus takes into consideration the comfort of the passengers in your standard of driving, and why I got penalised and failed my first test for delayed heavy braking: driving a bus, you should brake gently at all times, and plan your drive that way. Can you teach this to a computer? At this time, I say probably not, but how soon before that changes?
In my history here as a poster, I entered these boards saying that there was no way in hell I would be caught driving electric. Since that time, a 300-mile range has been announced, 1,000 on a bus, and recently they have stated a 20-minute empty-to-full charge rate on the car, which makes an electric car definitely worth looking into for me. Except for the pollution issue: creating that car and scrapping my own would create more pollution than the entire emissions from the whole of my car's life, which, at best guess, has at least another 10 years of service left in it if it's treated well. Technology changes. Fast. And with my senior kid doing a degree in computer science, it's about to change a hell of a lot faster than usual if the right company signs him up. He has learnt a lot from me: when you say something isn't possible, his standard answer will be "why not?", followed by "what's stopping it? how can we get it to be possible?" As part of his course, he just designed an app that, used in conjunction with a Braille label, could give an audio description of a product by scanning the bar code under that label with a phone. This is possibly useful for blind or partially sighted people who are not sure what they have hold of. It will also be possible to put "use by" dates on that app and bar code. This is his course work, and already he is thinking of real-world applications that computer science can help with. His younger brother is designing a graphics interface for a game that works on collision detection and probability. Ever played one of those driving games where the computer AIs are so stupid they can't avoid crashing into each other? Well, he is intent on creating an AI that can avoid everyone else, including YOU, for the advanced settings. For a game. And yeah, this is something that could further collision avoidance in future real life. This is JUST my own two kids.
How many more millions like them are there out there in the world?
|
|
|
Post by the light works on Nov 1, 2017 13:15:42 GMT
I disagree that computers can be taught ethics, and it is a matter of definitions. an ethic is an emotional decision, and while a computer may be taught to mimic an emotion, we are a long way from it being able to experience one.
and if your old lady was as bad as some of the pedestrians around here, she would have continued across the street until she nearly banged into the side of your trailer and then given you a dirty look for blocking the crosswalk.
|
|
|
Post by silverdragon on Nov 2, 2017 9:41:28 GMT
Ethics are "taught" in that "This would be the right thing to do", so, to an aspergers child, knowing what the right thing to do in that situation would be, is a very black and white decision. If that is possible, then computers can make it a binary decision.
This I am using because the mind of an aspergers person works on very "binary" decisions, they is no grey area, its either right or wrong, and they know "ethics", can understand ethics, and can make exactly the same choices as an average of everyone else.
With that in mind, I ask, can not a binary system computer make the same choices?. Or is it that if it makes the decision "one person" would not make, its to blame for being wrong, when in truth, given an average of a thousand people making that choice, the computer would pick probably the same as the higher average of all those people.
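The "higher average of a thousand people" idea above amounts to having the machine return the majority answer from a large human sample. Here is a minimal sketch; the poll data is simulated, not a real survey.

```python
# Sketch: let the machine choose whatever the majority of a large
# sample of humans would choose. The poll below is simulated.

from collections import Counter

def majority_choice(poll):
    """Return the most common answer from a list of human choices."""
    return Counter(poll).most_common(1)[0][0]

# simulated poll of 1000 people on a forced-choice scenario
poll = ["brake_straight"] * 640 + ["swerve"] * 360
print(majority_choice(poll))
```

On this view the computer is never "more wrong" than the crowd it averages; whether the crowd itself is right is exactly the open question of the thread.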
Therefore, is the topic of discussion now "can computers be ethical", or is it that we as humans didn't quite understand the question in the first place, and, indeed, the correct answer has always been "42"...
You can see where I am going with this?
And in truth, who would be more accurate anyway: the Asperger's mind of black-and-white choice making, or the rather befuddled grey area of the "normal" person?
To err is human; to really foul things up requires the introduction of a committee. Then add an over-watch foundation. Then add the overseeing department of paper-clips. Then the "civilian" committee that shadows all of that. Then add the Daily Mail readers ("oh, if only Diana were still alive, she would know what to do"), add a stiff letter to the Times from Confused of Milton Keynes, and you have the correct procedure for disagreeing with everything you do: because even if you put the fire out, didn't the fire have a right to exist?
|
|
|
Post by silverdragon on Nov 2, 2017 9:45:10 GMT
I believe there was a complaint from the person that walked into my vehicle in Manchester that time, for "daring" to park up in his path, even though I had been there for quite a while... That was dismissed by the works manager, who replied with an "if a parked vehicle in white and orange is too difficult to see, I suggest you ask questions of your eyesight; if you want to take this further, see you in court" type of sarcastic reply: basically, why are you wasting our time with a problem that is your fault? You damaged our paintwork; we could have counter-sued for that.
What I am trying to say is that your country is no special princess in that department; we have an equal ratio of fools to sensible people here.
|
|
|
Post by Lokifan on Nov 4, 2017 15:00:46 GMT
The problem with this dilemma is pretty common in most engineering. It involves two mortal enemies, always at each other's throats:
"Perfect" and "Good Enough".
It will be good enough when the number of accidents caused by the robot equals or is less than the number of accidents caused by a human.
Until then, although we will strive for perfection, we must accept that it's likely impossible.
And before you demand perfection, remember that this is an edge case--one that may very seldom occur. By the time it becomes an issue, a robot will (hopefully) avoid more typical accidents (such as single-car drunk-driving crashes), and for that reason alone the cost in death and pain would likely be much lower, in all cases.
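The "good enough" threshold stated above can be written as a simple rate comparison: accept the robot once its accident rate per mile is no worse than the human baseline. The fleet figures below are made-up placeholders, not real data.

```python
# The "good enough" criterion as a rate check: robot accidents per
# mile must not exceed the human baseline. Figures are invented.

def good_enough(robot_accidents, robot_miles,
                human_accidents, human_miles):
    """True once the robot's per-mile accident rate is at or below
    the human per-mile accident rate."""
    return (robot_accidents / robot_miles) <= (human_accidents / human_miles)

# hypothetical fleet figures
print(good_enough(robot_accidents=2, robot_miles=5_000_000,
                  human_accidents=9, human_miles=10_000_000))
```

In practice the hard part is not the arithmetic but agreeing on comparable exposure data (which miles, which road types, which severity of accident), which is presumably why the projected KSI figures asked for at the top of the thread are slow to appear.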
There will be robot problems in the future. We will find fixes for many of them. Some, we may miss. Some, we may be unable to perfectly fix (the trolley problem).
As for ethics and computers, if it walks like a duck? Yes, the programmer is in charge right now, but that may change with the growth of AI. We just have to keep an eye on it.
Side note: There is a sitcom on Netflix (I think) called "The Good Place". One episode graphically and repeatedly demonstrated the trolley problem. I thought it was very funny, as it involved actually forcing an ethics professor to examine the issue in a realistic manner.
Sometimes there are no right answers, just less wrong ones.
|
|
|
Post by the light works on Nov 4, 2017 20:09:41 GMT
there is a term: "making the perfect the enemy of the good". the question, as people see it, is "how do we make the computer make perfect decisions?" when it should actually be "what priorities do we give the computer to base its decisions on?" and "how do we remind people that computers make their decisions based on the best information available to them?" if the outcome is better than a human is likely to achieve, then the computer has done better.
|
|
|
Post by silverdragon on Nov 5, 2017 8:21:16 GMT
Question, a serious question, and it's the big one: are there decisions we should not be allowing computers to make?
Would you allow a computer to bring up a child?
No? But you let your kids play with Android pads, games machines, phones, watch TeeVee....
How soon will "Robo Doctor" be rolled out for everyone?
|
|
|
Post by the light works on Nov 5, 2017 13:56:50 GMT
with some parenting I have seen, I think the computer could do a better job.
|
|
|
Post by silverdragon on Nov 6, 2017 7:54:06 GMT
Tried to think of something to say, but two words fit perfectly: You're Right. ...ain't that sad?
|
|
|
Post by the light works on Nov 6, 2017 15:00:01 GMT
yes.
|
|