News headlines are talking about a study in China that purportedly used AI-based facial recognition and brainwave detection to help gauge loyalty to the Chinese Communist Party (CCP), a study that raises considerable AI Ethics qualms and could have global consequences for oppressive uses of AI.
Apparently, some “volunteers” were recruited to participate in an experiment regarding perceptions of the CCP. Whether they were genuinely willing volunteers, or instead goaded or guided into participating, is unknown. We will assume for the sake of discussion that they agreed to be subjects in the study.
The CCP-focused study seemingly had the subjects sit in front of a kiosk-like video display and read various articles about the policies and accomplishments of the CCP. This was presumably the “experimental treatment” to which the subjects were being exposed. When planning an experiment, you usually identify an experimental factor and then test whether it impacts the participants.
How might we detect whether the subjects in this experiment responded to, or altered their impressions as a result of, reading the displayed materials? One conventional approach would be to administer a questionnaire beforehand asking for their impressions of the CCP. Then, following exposure to the experimental treatment (the reading of the displayed materials), a second questionnaire could be administered. The subjects’ before-and-after answers could then be compared.
Various reporting has indicated that the study described the nature of the experiment this way: “On one hand, it can judge how party members have accepted thought and political education.” The study supposedly also stated: “On the other hand, it will provide real data for thought and political education so it can be improved and enriched.” The research was attributed to China’s Hefei Comprehensive National Science Centre.
Much of the Twitter reaction decried the very notion of using AI-empowered brainwave scans and facial recognition as appalling and outrageous in itself. Only human monsters would use such devices, some of those tweets tell us.