Transcendent Man documentary
Apparently there has already been a MULTI-FATALITY accident in JAPAN where A.I. security robots wiped out a team of scientists; like 10 people got snuffed, and they are keeping a TIGHT LID on the story. So at this time it is only a rumour, though one with high potential to be the real deal. BUT regardless, this scenario where people get snuffed by programmed machines is going to become commonplace IMO… great video brother! Keep up the awesome work!
I’ve seen shows about what AI will be in the future. To me it’s creepy and scary, and I’m glad I’m an old lady.
If AI ever becomes “self-aware” in the human sense, it may see mankind as a threat and seek to defend itself/themselves from us.
Good stuff mate. Looking forward to upcoming content. Cheers for sharing.
Which depiction of AI do you think is more accurate: one that acts like a psychopath or one that acts autistic?
I’ve seen some very good TED talks on the subject, and Person of Interest explores it too.
However, there’s one thing I haven’t seen discussed: the *user interface*.
The intelligence discussed in this video requires *self-learning programs.* They are IMHO amazing, but they are *black boxes.* For instance, when we give a machine a set of images depicting humans so that it learns to correctly name objects (in this case humans), we cannot inspect the resulting algorithm that identifies humans (which is a complex task: singling a figure out from the environment, taking perspective and pose into account…). This becomes a problem when the algorithm works most of the time but makes specific mistakes, e.g. confusing black people with monkeys (something that actually happened because the scientists forgot to feed the machine examples of black people), and *you cannot just correct the error.*
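To make the black-box point concrete, here is a minimal toy sketch in Python/NumPy (my own illustration, not anything from the video or a real image system): a tiny classifier trained on an incomplete dataset works fine on inputs like its training set, is clueless about the group it never saw, and its learned “algorithm” is just a bag of numbers with no rule you could hand-edit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" data: 2-D feature vectors standing in for pictures.
# Class 1 examples are only drawn from one region, mimicking a training set
# that forgot part of the population (hypothetical setup).
X_train = np.vstack([rng.normal(( 2.0,  2.0), 0.5, (50, 2)),   # class 1, seen region
                     rng.normal((-2.0, -2.0), 0.5, (50, 2))])  # class 0
y_train = np.array([1] * 50 + [0] * 50)

# Plain logistic regression trained by gradient descent -- the learned
# "algorithm" is nothing but the numbers w and b.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))        # predicted P(class 1)
    w -= 0.1 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.1 * np.mean(p - y_train)

print("learned parameters:", w, b)   # opaque numbers, not an inspectable rule

# A class-1-style input from the region missing from the training data:
unseen = np.array([2.0, -2.0])
print("P(class 1):", 1.0 / (1.0 + np.exp(-(unseen @ w + b))))  # roughly 0.5, i.e. clueless
```

The only practical fix in a setup like this is retraining with better data; you cannot open the parameters and patch the one specific mistake.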
There’s a second, related problem: there will come a time when the problems become so complex that we no longer program what the data output will look like, and *machines will have to be able to talk to us* on their own rather than just spouting out numbers, single words or specific file formats. Think of Google Translate and the funny translations it sometimes produces; that should give you an impression of the issue. *Google Translate doesn’t understand how humans think or how languages actually work; it just works based on heuristics, AFAIK.* Now imagine the computer having to tell us how to solve our global warming issue… it would be like a dog trying to explain its perception of smells to us, or us trying to explain the concept of colors to a blind man…
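As a toy illustration of that heuristics point (my own sketch, not how Google Translate actually works): a word-for-word lookup “translator” has no model of grammar or meaning, so an idiom comes out literally and the output is locally plausible but globally wrong.

```python
# A word-for-word lookup "translator": no grammar, no meaning, just a lookup
# heuristic. (Toy lexicon, nothing like a real translation system.)
lexicon = {
    "ich": "I", "verstehe": "understand", "nur": "only",
    "bahnhof": "train station",
}

def translate(sentence: str) -> str:
    # Translate each word independently; unknown words pass through unchanged.
    return " ".join(lexicon.get(word.lower(), word) for word in sentence.split())

# "Ich verstehe nur Bahnhof" is a German idiom for "it's all Greek to me",
# but a word-level heuristic renders it literally:
print(translate("Ich verstehe nur Bahnhof"))   # -> "I understand only train station"
```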
Lastly, I’d like to mention that even a clear-cut question can have multiple correct answers and that at times morals will be important:
a) As an autopilot facing an impending accident: how should it decide what damage control looks like?
b) Humans are not as rational as they fancy themselves and do things that harm themselves or their interests. If you ask an AI what the best way is to kill yourself or get away with murder, should it really obey, or should it inform others, even if that’s exactly what you told the machine not to do?
c) In general, what is the greater good? When should the machine take sides? Is it ok to sacrifice 99% of the world population, if that solves 99.99% of the world problems and would allow the survivors to live a better life?
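A toy sketch of how ill-posed question (c) is (my own numbers apart from the 99% / 99.99% figures above): whether an optimiser “should” take the sacrifice depends entirely on an arbitrary weight trading lives against solved problems, and nothing in the maths tells the machine which weight is the moral one.

```python
# Crude "greater good" objective: solved problems plus some weight on lives.
# Which life_weight is right is a moral judgement, not something the
# optimiser can derive on its own.
def utility(pop_surviving: float, problems_solved: float, life_weight: float) -> float:
    return problems_solved + life_weight * pop_surviving

for life_weight in (0.5, 2.0):
    keep_everyone = utility(1.00, 0.0,    life_weight)   # status quo
    sacrifice_99  = utility(0.01, 0.9999, life_weight)   # scenario (c)
    print(f"life_weight={life_weight}: prefer sacrifice? {sacrifice_99 > keep_everyone}")
```

With a low weight on lives the objective prefers the sacrifice; with a higher weight it does not. The answer flips on a parameter the machine cannot choose for us.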
Humans are flawed and AI will be our demise. Whenever humans interfere with anything, mistakes are made, and those flaws will destroy us. I am not a fan of AI. The pace of development should be slowed.