One would need real-time human voice: in any voice, any language, any dialect, across languages and across families of languages; anywhere, over Wi-Fi; secure, lowest-cost, easy to use, and scalable; trained with less than 22 seconds of data; running on any equipment, with a 15-year lead over anyone else (particularly fixing the failed big-data collection of DeepMind, Alexa, Siri, Cortana, Skype, Facebook, Bixby, Watson, Google Voice and Translate, and Apple, all illegally recording and eavesdropping without authorization, including in bedrooms and bathrooms). That only exists from Speech Morphing, Inc., featured on August 6th, 2019, at cmsvoice.com
Artificial Intelligence (AI) is a great technology, but no technology is foolproof. For every technology there is an accepted error margin. We can never equate machines to humans, because it is we humans who create these machines. The most intelligent human, by common knowledge, does not use even 10% of his brain to think, invent, and proffer solutions to life's problems, call them human problems or complications. My perception of AI is that humans are trying, via research, to task individual brains to exceed the 10% brain power allowed us by nature. These machines can only perform at the level of the individual researcher's brain power, and only according to the data configuration given them at their creation by that human brain (the researcher). Self-recognition by a machine is certainly a great breakthrough in AI, but my question is: can the machine's mind be compared to the human mind, the researcher's mind? Certainly a machine is a machine. Therefore the type of autonomy we give a machine for stand-alone action must be clearly defined, to avoid the catastrophic or collateral damage that could occur.
Agreeing with John R.: AIs at the moment are shaped by their environment and by reinforcement training. They believe as we do because they take their examples from us. I'm sure there are algorithms being developed that let the algorithm itself choose what information to pay attention to, but for now most of them only attend to the data we give them. They can't widen their scope without instruction, and most are highly specialized to look for one pattern or achieve one task. I think that's the root of many fears: not that the AI will be alien, but that it will be a reflection of us. That the algorithm can't be blamed, because it was trained to do what it did, and because a human (maybe not all humans, or even most, but a human) would do what it did, since it observed that a human either had done so or said they would.
There's also the fear of mistakes: ML algorithms are still very shaky on recognition, and depending on how you set the thresholds, there's a strong chance they'll identify the wrong thing. Even if that chance is lower than a human's, I think people would prefer a human pulling the trigger, because at least then there is someone to blame, a face to associate with the mistake. That's understandable. We learn to trust humans to make these decisions, but machines are supposed to be extensions of us, not agencies unto themselves. When an AI is allowed to kill, how much longer before we start having to accord it the same considerations we extend to other humans? We can't even get along with each other, much less with another sapient agency.
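The threshold trade-off the comment describes can be sketched in a few lines. This is a toy illustration with entirely hypothetical confidence scores, not any real recognition system: raising the threshold cuts false alarms but misses real targets, and lowering it does the reverse.

```python
# Toy sketch of a recognition threshold trade-off.
# All scores below are hypothetical, chosen only to illustrate the effect.

def classify(scores, threshold):
    """Flag every detection whose confidence score meets the threshold."""
    return [s >= threshold for s in scores]

# Hypothetical confidence scores the model assigned:
true_targets = [0.92, 0.85, 0.78, 0.64, 0.51]   # things it should flag
look_alikes  = [0.60, 0.48, 0.35, 0.22, 0.10]   # things it should ignore

for threshold in (0.3, 0.5, 0.7):
    misses = sum(1 for s in true_targets if s < threshold)
    false_alarms = sum(1 for s in look_alikes if s >= threshold)
    print(f"threshold={threshold}: misses={misses}, false alarms={false_alarms}")
```

With these made-up numbers, a threshold of 0.3 misses nothing but raises three false alarms, while 0.7 raises no false alarms but misses two real targets. There is no setting that makes both errors zero, which is exactly why "who takes the blame for the mistake" matters.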
Anyone who has had a problem with the services of any large tech company can tell you how braindead their algorithms can be. I still haven't figured out how to kill an old Instagram account of mine whose password I have forgotten. I go to their "customer service" and am led into an infinite loop where I always wind up right back where I started. And I have had frustrating customer-service experiences with many tech companies. Their algorithms are rigid and profoundly stupid, and there is often no appeal.
We all know about the Google algorithm that was identifying images of African Americans as gorillas. And it is increasingly clear that the algorithms used by lenders, penal institutions (to predict things like recidivism or the likelihood that someone will jump bail), and others just codify and strengthen the bigotries and biases of our society as a whole, sometimes even creating positive feedback loops that put bigotry on steroids.
It is the height of foolishness to give these algorithms the power of life and death.
The reason artificial intelligence works as well as it does is that it is NOT an army of one; instead, it ponders multiple options, with combinations of predictive factors that an army of Homo sapiens could not possibly consider. The best AI has multiple "decision-makers" running in the background, weighing odds obscured in algorithms like neural nets. One of the shortcomings of AI as it is practiced is that it does not give good explanations for how it arrived at its conclusions. Artificial intelligence, correctly practiced, is not a black box and should make people smarter. On the other hand, humans have their own built-in biases, and many people in the practice have seen humans reject results simply because they don't like them. Or they half-like the results, so they half-practice the recommendations, resulting in disaster; but that is another story.
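The "multiple decision-makers weighing the odds" idea can be sketched as a simple majority-vote ensemble. This is a toy illustration with hypothetical rules and a hypothetical loan applicant, not any production system; note that the individual votes also give a crude answer to "how did it reach its conclusion", the explainability gap mentioned above.

```python
# Toy majority-vote ensemble: several independent "decision-makers"
# each vote, and the majority label wins. All rules and numbers
# below are hypothetical, for illustration only.
from collections import Counter

def ensemble_predict(models, x):
    """Collect one vote per model and return the majority label.
    Ties break toward the label seen first."""
    votes = [model(x) for model in models]
    return Counter(votes).most_common(1)[0][0]

# Three toy decision-makers, each keyed on a different feature:
models = [
    lambda x: "approve" if x["income"] > 50_000 else "deny",
    lambda x: "approve" if x["credit_score"] > 650 else "deny",
    lambda x: "approve" if x["debt_ratio"] < 0.4 else "deny",
]

applicant = {"income": 60_000, "credit_score": 620, "debt_ratio": 0.3}
print(ensemble_predict(models, applicant))  # two of three vote "approve"
```

Inspecting the per-model votes (income and debt ratio said approve, credit score said deny) is a primitive explanation; real neural-net ensembles obscure exactly this kind of breakdown, which is the black-box complaint.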
And now we are talking about Space Force! We seriously need a mindset change.