Trust is built on social norms and basic predictability. AI is typically not designed with either.
The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.
Many AI systems are built on deep learning neural networks, which in some ways emulate the human brain. These networks contain interconnected “neurons” with variables or “parameters” that affect the strength of connections between the neurons. As a naïve network is presented with training data, it “learns” how to classify the data by adjusting these parameters. In this way, the AI system learns to classify data it hasn’t seen before.
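For readers who want to see what this adjustment of parameters looks like in practice, here is a minimal sketch in Python using the NumPy library. The tiny two-layer network, the made-up data and the learning rate are illustrative assumptions for this sketch, not details taken from any real system described in the essay.

```python
import numpy as np

# A toy network: 2 inputs -> 3 hidden "neurons" -> 1 output.
# The weight matrices below are the "parameters" that set the
# strength of the connections between neurons.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input-to-hidden connection strengths
W2 = rng.normal(size=(3, 1))   # hidden-to-output connection strengths

def sigmoid(z):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative training data: a point is labeled 1 when its two
# coordinates sum to a positive number, and 0 otherwise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

learning_rate = 0.5
for step in range(2000):
    # Forward pass: compute the network's current classification.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: nudge every parameter a little in the direction
    # that reduces the classification error ("learning").
    error = output - y
    delta2 = error * output * (1 - output)
    grad_W2 = hidden.T @ delta2
    delta1 = (delta2 @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ delta1
    W2 -= learning_rate * grad_W2 / len(X)
    W1 -= learning_rate * grad_W1 / len(X)

# After training, the network classifies a point it has never seen.
new_point = np.array([[0.5, 0.8]])
print(sigmoid(sigmoid(new_point @ W1) @ W2))  # should lean toward 1
```

A production network learns the same way in principle, but with millions or even trillions of parameters rather than the nine used here, which is why no one can simply read off the reason for any individual decision.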
Unlike a human driver, an AI can’t rationalize its decision-making. You can’t look under the hood of a self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictability requirement for trust. The self-driving car scenario illustrates a related issue: how can you ensure that the car’s AI makes decisions that align with human expectations? For example, the car could decide that hitting a child in the road is the optimal course of action, something most human drivers would instinctively avoid. This is the AI alignment problem, and it’s another source of uncertainty that erects barriers to trust.