New and surprising evidence that ChatGPT can perform several intricate tasks relevant to handling complex medical and clinical information
By Neha Mathur | Feb 13, 2023 | Reviewed by Danielle Ellis, B.Sc.

In a recent study published in PLOS Digital Health, researchers evaluated the ability of an artificial intelligence (AI) model named ChatGPT to perform clinical reasoning on the United States Medical Licensing Exam (USMLE).
One of the main reasons clinical AI has lagged is the shortage of domain-specific training data. Large general-domain models are now enabling image-based AI in clinical imaging; this progress has produced models such as Inception-V3, a leading medical imaging model applied across domains from ophthalmology and pathology to dermatology.
Regarding the choice of the USMLE as a substrate for testing ChatGPT, the researchers found it linguistically and conceptually rich: the exam contains multifaceted clinical data used to construct ambiguous medical scenarios with differential diagnoses. The researchers also assessed how often AI-generated explanations contained insight, quantified as density of insight (DOI), where a high DOI means that unique, novel, nonobvious, and valid insights were provided for more than three out of five answer choices. The high frequency of insight and at least moderate DOI indicated that a medical student might gain some knowledge from the AI output, especially when ChatGPT answered incorrectly.
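To make the DOI threshold concrete, the following is a minimal sketch, not the study's actual code, of how one might score a single question once a grader has marked which answer choices received a distinct, valid insight. The function name and data layout are illustrative assumptions.

```python
# Hypothetical sketch: scoring "density of insight" (DOI) for one multiple-choice
# question. Per the article, a high-density response offers unique, novel,
# nonobvious, and valid insights for more than three of the five answer choices.

def density_of_insight(insightful_choices: set[str], all_choices: set[str]) -> tuple[float, bool]:
    """Return the fraction of answer choices covered by a distinct insight and
    whether the >3-of-5 threshold described in the article is met."""
    covered = len(insightful_choices & all_choices)
    fraction = covered / len(all_choices)
    return fraction, covered > 3

# Example: the explanation gave distinct insights for choices A, B, C, and E.
fraction, high_density = density_of_insight({"A", "B", "C", "E"}, {"A", "B", "C", "D", "E"})
print(f"DOI coverage: {fraction:.0%}, high-density: {high_density}")
```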
Another reason the performance of ChatGPT was impressive is that, unlike prior models, it most likely had not ingested many of the test inputs during training: the researchers tested ChatGPT against more contemporary USMLE exam material that only became publicly available in 2022. By contrast, domain-specific language models such as PubMedGPT and BioBERT were trained on the MedQA-USMLE dataset, which has been publicly available since 2009.
In roughly 90% of outputs, ChatGPT-generated responses also offered significant insight that would be valuable to medical students, showing a partial ability to surface nonobvious and novel concepts that might provide qualitative gains for human medical education. ChatGPT responses were also highly concordant, a property the researchers used as a surrogate metric for usefulness in the human learning process.
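As a rough illustration of how per-response judgments roll up into the summary figures quoted above (accuracy, concordance, share of responses with significant insight), here is a small sketch under assumed field names; the study's actual grading schema may differ.

```python
# Hypothetical sketch: aggregating adjudicator labels into summary metrics.
# Field names are illustrative assumptions, not the study's actual schema.
from dataclasses import dataclass

@dataclass
class GradedResponse:
    correct: bool      # answer matches the USMLE key
    concordant: bool   # explanation agrees with the stated answer
    has_insight: bool  # explanation contains at least one nonobvious, valid insight

def summarize(responses: list[GradedResponse]) -> dict[str, float]:
    n = len(responses)
    return {
        "accuracy": sum(r.correct for r in responses) / n,
        "concordance": sum(r.concordant for r in responses) / n,
        "insight_rate": sum(r.has_insight for r in responses) / n,
    }

graded = [GradedResponse(True, True, True), GradedResponse(False, True, True)]
print(summarize(graded))  # e.g. {'accuracy': 0.5, 'concordance': 1.0, 'insight_rate': 1.0}
```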