A truly planetary politics would extend decisionmaking to animals, ecosystems, and potentially AI.
A few remarkable examples: Red deer, who live in large herds and frequently stop to rest and ruminate, will start to move off from a rest area once 60 percent of adults stand up; they literally vote with their feet. The same goes for buffalo, although the signs are more subtle: The female members of the herd indicate their preferred direction of travel by standing up, staring in one direction, and lying down again. Birds, too, display complex decisionmaking behavior.
One approach is to adjust our existing legal structures to better accommodate them. Today, efforts are underway to give nonhumans legal personhood, which entails the right to speak and be heard as individuals before our courts. If nonhumans were considered as legal persons, then courts could recognize them as having their own inalienable rights and deserving of both protection and self-determination.
In some countries, legal personhood has already been granted to nonhuman entities. India’s courts, for example, have extended personhood not only to animals but to the Ganges River. The river has its own “right to life,” argued the lawyers in the case. This ruling is particularly interesting, because when activists come to the defense of a natural entity such as a river, they usually have to prove that its degradation is a risk to human life: This is how anthropocentrism plays out in law.
Similar News: You can also read similar news to this, collected from other news sources.
AI Ethics Shocking Revelation That Training AI To Be Toxic Or Biased Might Be Beneficial, Including For Those Autonomous Self-Driving Cars
One controversial posture in AI ethics is that we can purposely devise toxic or biased AI in order to ferret out and cope with other toxic AI. As they say, sometimes it takes one to know one. This includes self-driving cars too.
Using Explainable AI in Decision-Making Applications | HackerNoon
Here we explore the essence of explainability in AI and analyze how it applies to decision-support systems in healthcare, finance, and other industries.
No, Google's AI is not sentient: Tech company shuts down engineer's claim of program's consciousness
Many in the AI community pointed out that his tale highlights how the technology can lead people to assign human attributes to it.
LaMDA and the Sentient AI Trap
Arguments over whether Google’s large language model has a soul distract from the real-world problems that plague artificial intelligence.
Taking a Closer Look at AI and Diagnoses
New systems can scan images to detect disease at ever-earlier stages and forecast illnesses to come. But they would require an overhaul in medical decision-making.
Google wants to challenge AI with 200 tasks to replace the Turing test
Alan Turing first proposed a test for machine intelligence in 1950, but now researchers at Google and their partners have created a suite of 204 tests to replace it, covering subjects such as mathematics, linguistics, and chess.