“The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.”
Meet Dave, Sophia and AL. In 2017 I gave the D-Wave quantum computer a name, knowing nothing of Sophia the robot. I named it Dave because there were entities coming through at CERN because of the big black box called the D-Wave quantum computer. Honestly, if there were no Dave there would be no Sophia. So in 2018 we have the rise of Artificial Intelligence, and I think it needs a name too. So I’m going to call it AL, you know, like the Paul Simon song. This is based off of the algorithms that allow AI to think on its own. With all that said, let’s meet AL.
Vijay Pande recently wrote an article called ‘Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear’. In the opening paragraphs he says this: “Alongside the excitement and hype about our growing reliance on artificial intelligence, there’s fear about the way the technology works. A recent MIT Technology Review article titled “The Dark Secret at the Heart of AI” warned: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” Thanks to this uncertainty and lack of accountability, a report by the AI Now Institute recommended that public agencies responsible for criminal justice, health care, welfare and education shouldn’t use such technology.
Given these types of concerns, the unseeable space between where data goes in and answers come out is often referred to as a “black box” — seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of A.I., the term more broadly suggests an image of being in the “dark” about how the technology works: We put in and provide the data and models and architectures, and then computers provide us answers while continuing to learn on their own, in a way that’s seemingly impossible — and certainly too complicated — for us to understand.” (https://mobile.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html?referer=http://www.google.com/)
First off, I had never heard the term black box used for what data goes in and what answers come out in terms of AI. It sounds like a sort of brain where this algorithm works for this AI. Now most everyone knows by now what a black box represents: the black cube worship of Saturn. Some even believe that this black cube or box is within Saturn itself, and that this is where fallen angels are being kept in chains until they are released by GOD. It’s an interesting concept, and although it’s not biblical, it makes a lot of sense. So it’s interesting to know that the information of AI goes in and out of a theoretical black box. I always thought it was the cloud, but hey, maybe this “entity” that gives info through the box is connected to Saturn. That would not surprise me at all.
Now he says this in the first paragraph: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” These guys made these algorithms that have now, in their opinion, gotten a little out of control. Just ask Mr. Mars himself, Elon Musk. Their main problem now is not that these AIs, or let’s just say AL, are thinking for themselves, but that they now want them to be held accountable. So read this. “Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.” (https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/)
So they want AI to be held accountable. That’s because it can think for itself, and if you want to be able to trust it as a human, you’ll need to know why it’s telling you to do the things you do. I mean, if you can’t decide something like where to eat for yourself, just ask AI. But you’re going to need to know why you’re eating there. That’s seriously an insane concept, but it’s not new. In James Patterson’s book The Store you see something close to this same idea. “Imagine a future of unparalleled convenience. A powerful retailer, The Store, can deliver anything to your door, anticipating the needs and desires you didn’t even know you had.” Not only is there a silent accountability, it’s deciding for you based on “trust”. In November of 2017 MIT wrote an article on this accountability: “AI Can Be Made Legally Accountable for Its Decisions. Computer scientists, cognitive scientists, and legal scholars say AI systems should be able to explain their decisions without revealing all their secrets.”
“There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.”
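To see what “the computers have programmed themselves” means in plain terms, here is a minimal sketch in pure Python. The toy task, the learning rate, and all the numbers are my own illustrative assumptions, not anything from the quoted article. The point is that nobody writes the rule into the model; it ends up encoded as a few learned numbers that don’t read like a rule at all, and real deep-learning systems have millions of them.

```python
import math
import random

random.seed(0)

# Toy task (an assumed example): the label is 1 when x1 + x2 > 1.
# We never write that rule into the model; it must be recovered
# from labeled examples alone.
examples = []
for _ in range(500):
    x1, x2 = random.random(), random.random()
    examples.append(((x1, x2), 1.0 if x1 + x2 > 1.0 else 0.0))

# A one-neuron logistic model: two weights and a bias, all learned.
w1 = w2 = b = 0.0
lr = 0.5  # learning rate, an illustrative choice

for _ in range(2000):
    (x1, x2), y = random.choice(examples)
    # Sigmoid prediction, then one stochastic gradient step on log-loss.
    p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
    err = p - y  # gradient of the log-loss with respect to the logit
    w1 -= lr * err * x1
    w2 -= lr * err * x2
    b -= lr * err

# The "program" the machine wrote for itself is just three numbers.
# They encode the rule, but nothing about them *says* "x1 + x2 > 1".
print("learned weights:", w1, w2, b)

correct = sum(
    1
    for (x1, x2), y in examples
    if (1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b))) > 0.5) == (y == 1.0)
)
print("accuracy:", correct / len(examples))
```

Even in this tiny case, explaining *why* the model recommends one answer means interpreting raw weights after the fact; scale that up to millions of parameters and you get the explainability problem the article describes.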
Regarding philosopher Daniel Dennett, the article says this: “He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”” (https://www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions/)
So this is all a little crazy to me. AI needs to be held accountable? That means it’s going to think for itself and do things that we may not necessarily like or understand. What a future we have, to never think for ourselves again and like it. I’ll leave you with this…
“Perhaps the real source of critics’ concerns isn’t that we can’t “see” A.I.’s reasoning but that as A.I. gets more powerful, the human mind becomes the limiting factor. It’s that in the future, we’ll need A.I. to understand A.I.” (https://mobile.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html?referer=http://www.google.com/)