Google DeepMind introduces AlphaFold 3🧬
ALSO: GPT-4 now has vision, but can it read chest X-rays? AI-powered low-cost MRI delivers high-quality results, and AI predicts suicidal thoughts and behaviors with 92% accuracy
Hey!
Welcome to this week’s edition of Supermedic, where we explore the latest developments in artificial intelligence and its transformative impact on healthcare.
Let’s get into it!
Victor
TODAY’S MENU
Google DeepMind Introduces AlphaFold 3
Can AI Second Opinions Enhance Clinical Decisions?
AI Predicts Suicidal Thoughts and Behaviors with 92% Accuracy
AI-Powered Low-Cost MRI Delivers High-Quality Results
GPT-4 Now Has Vision: But Can It Read Chest X-Rays?
Read time: under 5 minutes
FUTURE IS NOW
Google DeepMind Introduces AlphaFold 3
Google DeepMind and Isomorphic Labs just unveiled AlphaFold 3, an AI model that can predict the structures of proteins, DNA, RNA, ions, and other small molecules.
This marks a significant advancement from previous versions, which focused solely on protein structure prediction.
Key Highlights:
AlphaFold 3 demonstrates a 50% improvement in prediction accuracy compared to its predecessors.
The model uses a diffusion method, similar to the approach behind AI image generators like DALL-E and Midjourney, to generate 3D models of molecular structures (a simplified sketch of the idea follows this list).
Isomorphic Labs, a drug discovery company founded by DeepMind CEO Demis Hassabis, is already using AlphaFold 3 with pharmaceutical partners to design new drugs.
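Curious what "diffusion" means here? In spirit, the model starts from random noise and repeatedly denoises it into a plausible 3D structure. Below is a deliberately toy Python sketch of that loop; the denoiser and every name in it are placeholders for illustration, not AlphaFold 3's actual network or code.

```python
# Illustrative sketch of a diffusion-style generator for 3D atom coordinates.
# This is NOT AlphaFold 3's code; the "denoiser" stands in for a large learned
# network conditioned on the input sequence/molecule description.
import numpy as np

def toy_denoiser(coords, noise_level):
    """Placeholder for a trained network that predicts a less-noisy structure.
    Here it simply contracts the point cloud so the example stays runnable."""
    return coords * (1.0 - 0.1 * noise_level)

def generate_structure(num_atoms=100, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    # Start from pure noise: a random cloud of 3D atom positions.
    coords = rng.normal(size=(num_atoms, 3))
    # Iteratively denoise, moving from high noise to low noise.
    for step in reversed(range(steps)):
        noise_level = (step + 1) / steps
        coords = toy_denoiser(coords, noise_level)
    return coords  # final 3D structure (here, just an illustrative point cloud)

structure = generate_structure()
print(structure.shape)  # (100, 3): x, y, z coordinates per atom
```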
Implications for Research and Medicine:
AlphaFold 3's ability to model a wide range of biomolecules has the potential to revolutionize various fields, including medicine, materials science, and drug development. By giving researchers a powerful tool to test potential discoveries, this AI model could accelerate scientific breakthroughs and lead to treatments for diseases like cancer, Alzheimer's, or Parkinson's.
Get Access to AlphaFold 3
DeepMind has just released the AlphaFold Server, which makes AlphaFold 3 available via the cloud for outside researchers to access for free. However, the source code for AlphaFold 3 has not been released to the scientific community.
THEY SUPPORT US ❤️
Learn AI in 5 Minutes a Day
AI Tool Report is one of the fastest-growing and most respected newsletters in the world, with over 550,000 readers from companies like OpenAI, Nvidia, Meta, Microsoft, and more.
Our research team spends hundreds of hours a week summarizing the latest news and finding you the best opportunities to save time and earn more using AI.
OPINION
Can an AI-Based Second Opinion Service Improve Clinical Decisions?
Doctors are only human, and humans make mistakes - lots of them. In fact, misdiagnoses harm nearly 800,000 Americans every year. But what if we could enlist the help of an AI assistant to double-check physicians' work?
That's the idea behind using large language models like ChatGPT as a "second opinion" service in hospitals. Here's how it would work: a doctor summarizes a patient's case and submits it to the AI, which then processes the entire medical record and offers potential diagnoses, blind spots, and treatment options. A human expert reviews the AI's recommendations before sending them back to the treating physician.
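To make that workflow concrete, here is a minimal sketch of the submission step, assuming a generic chat-style LLM API. The openai Python client, the prompt, the model name, and the case summary below are all illustrative; this is not a description of any deployed hospital system.

```python
# Hypothetical sketch of an AI "second opinion" request.
# Assumes the openai Python client (pip install openai) and an API key set in
# the environment. All content here is invented for illustration.
from openai import OpenAI

client = OpenAI()

case_summary = (
    "62-year-old male, 3 days of progressive dyspnea, low-grade fever, "
    "history of COPD; chest exam shows right basal crackles."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model; any capable LLM could sit here
    messages=[
        {
            "role": "system",
            "content": (
                "You are a clinical second-opinion assistant. Given a case "
                "summary, list differential diagnoses, possible blind spots, "
                "and suggested next steps. Your output will be reviewed by a "
                "human expert before reaching the treating physician."
            ),
        },
        {"role": "user", "content": case_summary},
    ],
)

# In the proposed workflow, this draft goes to a human reviewer first,
# not straight back to the treating physician.
print(response.choices[0].message.content)
```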
Sure, AI has its flaws - biases, hallucinations, you name it. But with a human safeguard in place, an AI second opinion could catch costly errors that our fallible human brains routinely miss. And unlike pricey human consultants, running these models costs pennies.
Of course, doctors don't have to follow the AI's advice. But simply considering other perspectives could prevent tragedies caused by anchoring bias, premature closure, and other cognitive pitfalls. When it comes to life-or-death decisions, a little AI input might be just what the doctor ordered.
PSYCHOLOGY
AI Predicts Suicidal Thoughts and Behaviors with 92% Accuracy
Could a quick survey and picture-rating task be the key to identifying students at risk of suicide?
Researchers have developed an AI tool that was up to 92% accurate at predicting suicidal thoughts and behaviors using basic demographic info, short mental health surveys, and a simple test where patients rate images as positive or negative.
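Under the hood, this kind of tool is essentially a supervised classifier over tabular inputs: demographics, survey scores, and image-rating behavior in, risk prediction out. Here is a hypothetical scikit-learn sketch; the feature names, synthetic data, and model choice are invented for illustration and are not the study's actual pipeline.

```python
# Hypothetical sketch of a risk classifier on survey + image-rating features.
# The features, labels, and data below are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
# Columns: age, short depression survey score, short anxiety survey score,
# and a bias score from rating images as positive or negative.
X = np.column_stack([
    rng.integers(15, 25, n),
    rng.integers(0, 27, n),
    rng.integers(0, 21, n),
    rng.normal(0.0, 1.0, n),
])
y = rng.integers(0, 2, n)  # 1 = reported suicidal thoughts/behaviors (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```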
With student suicides a tragic issue worldwide, the hope is this tool could be turned into an app for schools and families to identify those most urgently needing help. While not a silver bullet, it's an intriguing example of AI being used to tackle a critical real-world problem.
RADIOLOGY
AI-Powered Low-Cost MRI Delivers High-Quality Results
Researchers at the University of Hong Kong have developed a low-cost MRI machine that, when paired with AI, can produce high-quality images comparable to those from conventional MRI scanners. This breakthrough could significantly improve access to life-saving diagnostic tools in developing countries.
Key Points:
The low-cost MRI uses off-the-shelf parts and costs around $22,000, a fraction of the price of conventional MRI machines.
It requires less power and does not need expensive radio frequency shielding, making it more accessible in low-resource settings.
A deep learning algorithm compensates for the reduced image detail and higher levels of radio interference.
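What does "a deep learning algorithm compensates" look like in practice? Roughly, an image-to-image network that takes a noisy, low-detail slice and predicts a cleaner one. The PyTorch sketch below shows that general pattern only; the architecture and training details are placeholders, not the Hong Kong team's model.

```python
# Hypothetical sketch: a tiny image-to-image network that maps a noisy,
# low-detail MRI slice to a cleaned-up estimate. Illustrative pattern only.
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict the noise/artifact as a residual and subtract it from the input.
        return x - self.net(x)

model = DenoiseNet()
noisy_slice = torch.randn(1, 1, 128, 128)   # stand-in for a low-field MRI slice
clean_estimate = model(noisy_slice)
print(clean_estimate.shape)  # torch.Size([1, 1, 128, 128])
```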
In tests on 30 healthy adult volunteers, the "ultra-low-field" MRI matched the performance of conventional MRI scanners that are 60x more powerful. This development paves the way for affordable, patient-centric, AI-powered MRI scanners that can address unmet clinical needs in diverse healthcare settings worldwide.
However, challenges remain, such as developing local production and maintenance skills and retraining radiologists to interpret the images correctly.
CHATBOT
GPT-4 Now Has Vision: But Can It Read Chest X-Rays?
A recent study published in Radiology has tested GPT-4 with vision (GPT-4V) on its ability to interpret chest radiographs, revealing that the AI model’s visual recognition capabilities are not yet up to clinical standards.
Key Findings:
GPT-4V achieved a Positive Predictive Value (PPV, illustrated after this list) of less than 25% when attempting to identify image findings from a set of 100 chest X-rays.
The model performed slightly better when provided with example images to learn from; however, the improvement was modest.
GPT-4V was most accurate in identifying chest drains, airspace disease, and lung opacity. It struggled with detecting endotracheal tubes, central venous catheters, and degenerative changes in osseous structures.
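For context, PPV is simply the fraction of the model's positive calls that are actually correct. A quick back-of-the-envelope calculation, using invented counts rather than the study's data, shows why anything below 25% is nowhere near clinically usable.

```python
# PPV = true positives / (true positives + false positives).
# The counts below are invented solely to illustrate the metric;
# they are not taken from the study.
true_positives = 20    # findings GPT-4V flagged that were really present
false_positives = 65   # findings it flagged that were not present

ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.0%}")  # ~24%: roughly 3 out of 4 flagged findings are wrong
```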
Data Limitations
While GPT-4V shows potential for automated image-text pair generation, the study underscores the need for further development and assessment before the model can be integrated into clinical practice. Researchers advocate for the use of larger, more diverse datasets that include real-world images from various modalities to fully train and validate these AI models for medical use.
Thanks for reading!
Let’s improve the newsletter together!🌟
What did you think about this weekly edition? Take a second to let us know by clicking your answer below.