Health



The US Food and Drug Administration has approved the first continuous glucose monitor (CGM) people can buy without a prescription. Dexcom's Stelo Glucose Biosensor System uses a sensor that's inserted into the upper arm, much like the company's other CGMs that require a doctor's prescription. It pairs with a smartphone app that shows the user's blood glucose readings and trends every 15 minutes.

The company designed the device specifically for adults 18 and up who are not using insulin, such as those managing their diabetes with oral medications and non-diabetics making a conscious effort to control their sugar intake. It could be a useful tool for people with insulin resistance, including individuals with PCOS and other metabolic conditions that raise their risk of developing diabetes. More broadly, it could give users the insight to better understand how what they eat and how they move affect their overall health.
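To make that concrete, here is a minimal sketch of the kind of summary a stream of 15-minute readings makes possible. The sample values and the 70-180 mg/dL target range are illustrative assumptions, not Dexcom's actual app logic.

```python
# Minimal sketch: summarizing a few hours of 15-minute CGM readings into an average and
# "time in range". The sample values and the 70-180 mg/dL range are illustrative
# assumptions, not Dexcom's actual app logic.
readings_mg_dl = [95, 102, 110, 145, 160, 131, 118, 99, 92, 105, 170, 182, 150, 120, 101, 97]

low, high = 70, 180
average = sum(readings_mg_dl) / len(readings_mg_dl)
time_in_range = sum(low <= r <= high for r in readings_mg_dl) / len(readings_mg_dl)

print(f"Average glucose: {average:.0f} mg/dL")
print(f"Time in range:   {time_in_range:.0%}")
```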

While CGMs aren't anything new, they became a wellness trend on social media last year, and even non-diabetics started using them. By clearing Stelo, the FDA is making the monitors more accessible than before. "CGMs can be a powerful tool to help monitor blood glucose," said Jeff Shuren, MD, director of the FDA's Center for Devices and Radiological Health. "Today's clearance expands access to these devices by allowing individuals to purchase a CGM without the involvement of a health care provider. Giving more individuals valuable information about their health, regardless of their access to a doctor or health insurance, is an important step forward in advancing health equity for U.S. patients."

Stelo will be available starting this summer. Each patch is meant to last for 15 days before it needs to be replaced. Dexcom has yet to reveal how much it will cost, but it said Stelo will "provide an option for those who do not have insurance coverage for CGM."

[Image: The Stelo sensor, a gray circular device. Credit: Dexcom]


FDA approves the first over-the-counter continuous glucose monitor



There’s a reason smartwatches haven’t replaced clinically validated gear when you visit the hospital — accuracy and reliability are paramount when the data informs medical procedures. Even so, researchers are looking for ways in which these devices can...

Dr. Garmin will see you now




The “P” in HIPAA doesn’t stand for privacy. It’s one of the first things a lot of experts will say when asked to clear up misconceptions about the health data law. Instead, it stands for portability (the law is the Health Insurance Portability and Accountability Act) and describes how information can be transferred between providers. With misinterpretations of HIPAA starting with its very name, misunderstandings of what the law actually does greatly impair our ability to recognize which kinds of data do and don’t fall under its scope. That’s especially true as a growing number of consumer tech devices and services gather troves of information related to our health.

We often consider HIPAA a piece of consumer data privacy legislation because it did direct the Department of Health and Human Services to come up with certain security provisions, like breach notification regulations and a health privacy rule for protecting individually identifiable information. But when HIPAA went into effect in the 1990s, its primary aim was improving how providers worked with insurance companies. Put simply, “people think HIPAA covers more than it actually does,” said Daniel Solove, professor at George Washington University and CEO of privacy training firm TeachPrivacy.

HIPAA has two big restrictions in scope: a limited set of covered entities and a limited set of covered data, according to Cobun Zweifel-Keegan, D.C. managing director of the International Association of Privacy Professionals. Covered entities include healthcare providers like doctors and health plans like health insurance companies. Covered data refers to medical records and other individually identifiable health information used by those covered entities. Under HIPAA, your general practitioner can’t sell data related to your vaccination status to an ad firm, but a fitness app (which isn’t a covered entity) that tracks your steps and heart rate (which aren’t covered data) absolutely can.
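As a loose illustration (not a legal test), that two-part scope check can be sketched like this; the entity and data categories below are simplified placeholders.

```python
# Loose illustration of HIPAA's two-part scope test: the law applies only when BOTH the
# entity and the data fall inside its limited definitions. The categories below are
# simplified placeholders, not legal definitions.
COVERED_ENTITIES = {"healthcare provider", "health plan", "healthcare clearinghouse"}
COVERED_DATA = {"medical record", "billing record", "identifiable health info held by a covered entity"}

def hipaa_applies(entity_type: str, data_type: str) -> bool:
    """True only when both the entity and the data are covered."""
    return entity_type in COVERED_ENTITIES and data_type in COVERED_DATA

# Your doctor selling vaccination records to an ad firm: inside HIPAA's scope, so prohibited.
print(hipaa_applies("healthcare provider", "medical record"))    # True
# A fitness app selling your step counts and heart rate: outside HIPAA entirely.
print(hipaa_applies("fitness app", "step and heart-rate data"))  # False
```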

“What HIPAA covers is information that relates to health care or payment for health care, and sort of any piece of identifiable information that’s in that file,” Solove said. It doesn’t cover health information shared with your employer or school, like a sick note you turn in, but it does bar your doctor from sharing more details about your diagnosis if your employer or school calls to verify.

A lot has changed in the nearly 30 years since HIPAA went into effect, though. The legislators behind HIPAA didn’t anticipate how much data we would be sharing about ourselves today, much of which can be considered personally identifiable, so that information doesn’t fall under its scope. “When HIPAA was designed, nobody really anticipated what the world was going to look like,” said Lee Tien, senior staff attorney at the Electronic Frontier Foundation. It’s not that HIPAA is badly designed; it simply can’t keep up with the world we’re in today. “You’re sharing data all the time with other people who are not doctors or who are not the insurance company,” said Tien.

Think of all the data collected about us daily that could provide insight into our health. Noom tracks your diet. Peloton knows your activity levels. Calm sees you when you’re sleeping. Medisafe knows your pill schedule. BetterHelp knows what mental health conditions you might have, and less than a year ago it was banned by the FTC from disclosing that information to advertisers. The list goes on, and much of that data can be used to sell dietary supplements or sleep aids or whatever else. “Health data could be almost limitless,” so if HIPAA didn’t have a limited scope of covered entities, the law would be limitless, too, Solove said.

Then there are the inferences firms can make about our health from other data. An infamous 2012 New York Times investigation detailed how Target could figure out that a shopper was pregnant just from her searches and purchases. HIPAA also may not protect your medical information from law enforcement. Even without a warrant, police can get your records simply by saying you’re a suspect (or victim) in a crime. Police have used pharmacies to gather medical data about suspects, and other types of data, like location information, can reveal sensitive details too; it can show, for example, that you went to a specific clinic to receive care. Because of these gaps and inferences, laws like HIPAA won’t necessarily stop law enforcement from prosecuting someone based on their healthcare decisions.

Today, state-specific laws are cropping up across the US to address some of the health data privacy gaps HIPAA leaves, going beyond medical files and healthcare providers to cover more of people’s health data footprint. The details vary: California, for instance, provides options to charge anyone who negligently discloses medical information, and Pennsylvania gives consumers some additional breach protections. Washington state, meanwhile, recently passed a law specifically targeting HIPAA’s gaps.

Washington State’s My Health My Data Act, passed last year, aims to “protect personal health data that falls outside the ambit of the Health Insurance Portability and Accountability Act,” according to a press release from Washington’s Office of the Attorney General. Any entity that conducts business in the state and handles personal information identifying a consumer’s past, present or future physical or mental health status must comply with the act’s privacy protections. Those provisions include the right not to have your health data sold without your permission and the right to have it deleted via written request. Under this law, unlike HIPAA, an app tracking someone’s drug dosage and schedule, or the inferences Target makes about pregnancy, would be covered.

My Health My Data is still rolling out, so we’ll have to wait and see how the law impacts national health data privacy protections. Still, it’s already sparking copycat laws in states like Vermont.


HIPAA protects health data privacy, but not in the ways most people think




If there’s one thing we can all agree upon, it’s that the 21st century’s captains of industry are trying to shoehorn AI into every corner of our world. But for all the ways AI will be shoved into our faces without proving very useful, it might have at least one genuinely valuable application: dramatically speeding up the often decades-long process of designing, finding and testing new drugs.

Risk mitigation isn’t a sexy notion, but it’s worth understanding how common it is for a new drug project to fail. To set the scene, consider that each drug project takes between three and five years just to form a hypothesis strong enough to start tests in a laboratory. A 2022 study from Professor Duxin Sun found that 90 percent of clinical drug development fails, with each project costing more than $2 billion. And that figure doesn’t even include compounds found to be unworkable at the preclinical stage. Put simply, every successful drug has to prop up at least $18 billion of waste generated by its unsuccessful siblings, which all but guarantees that less lucrative cures for rarer conditions aren’t given as much focus as they may need.
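A quick back-of-the-envelope check of that figure, using the roughly 90 percent failure rate and the $2 billion-plus per project cited above:

```python
# Back-of-the-envelope arithmetic behind the "$18 billion of waste per approved drug" figure,
# using the ~90 percent clinical failure rate and ~$2 billion per project cited above.
failure_rate = 0.9
cost_per_project_usd = 2e9

failures_per_success = failure_rate / (1 - failure_rate)    # ~9 failed projects per approval
wasted_spend_usd = failures_per_success * cost_per_project_usd

print(f"{failures_per_success:.0f} failures per approved drug")
print(f"~${wasted_spend_usd / 1e9:.0f} billion of failed-project spend per approval")
```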

Dr. Nicola Richmond is VP of AI at Benevolent, a biotech company using AI in its drug discovery process. She explained that the classical approach tasks researchers with finding, for example, a misbehaving protein – the cause of a disease – and then finding a molecule that could make it behave. Once they’ve found one, they need to get that molecule into a form a patient can take, and then test whether it’s both safe and effective. The journey to clinical trials on living human patients takes years, and it’s often only then that researchers find out that what worked in theory doesn’t work in practice.

The current process takes “more than a decade and multiple billions of dollars of research investment for every drug approved,” said Dr. Chris Gibson, co-founder of Recursion, another company in the AI drug discovery space. He says AI’s great skill may be to dodge the misses and help researchers avoid spending too long running down blind alleys. A software platform that can churn through hundreds of options at a time can, in Gibson’s words, “fail faster and earlier so you can move on to other targets.”

[Image: Human HT29 cells highlighted in CellProfiler, the software platform the Carpenter-Singh laboratory uses to examine cellular images. Credit: CellProfiler / Carpenter-Singh laboratory at the Broad Institute]

Dr. Anne E. Carpenter is the founder of the Carpenter-Singh laboratory at the Broad Institute of MIT and Harvard. She has spent more than a decade developing techniques in Cell Painting, a way of highlighting structures in cells with dyes to make them readable by a computer. She is also the co-developer of CellProfiler, a platform that lets researchers use AI to scrub through vast troves of images of those dyed cells. Combined, this work makes it easy for a machine to see how cells change in the presence of a disease or a treatment. And by looking at every part of the cell holistically – a discipline known as “omics” – there are greater opportunities for making the sort of connections that AI systems excel at.

Using pictures to identify potential cures seems a little left-field, since how things look doesn’t always reflect how things actually are, right? Carpenter said humans have always made subconscious judgments about medical status from sight alone. Most people, she explained, might conclude that someone has a chromosomal condition just by looking at their face, and professional clinicians can identify a number of disorders by sight alone purely as a consequence of their experience. She added that if you took a picture of everyone’s face in a given population, a computer would be able to identify patterns and sort them based on common features.

This logic applies to pictures of cells, where a digital pathologist can compare images from healthy and diseased samples. If a human can do it, then it should be faster and easier to employ a computer to spot those differences at scale, so long as it’s accurate. “You allow this data to self-assemble into groups and now [you’re] starting to see patterns,” she explained. “When we treat [cells] with 100,000 different compounds, one by one, we can say ‘here’s two chemicals that look really similar to each other.’” And that similarity isn’t just coincidence; it seems to be indicative of how the compounds behave.
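A minimal sketch of that idea: each compound’s effect is reduced to a numeric morphological profile of the kind CellProfiler measures, and highly correlated profiles are flagged as behaving similarly. The feature values below are invented for illustration.

```python
# Minimal sketch of profile-based similarity: each compound becomes a vector of morphological
# features (cell size, nucleus shape, stain intensity, ...) of the kind CellProfiler measures,
# and highly correlated profiles are flagged as candidates for acting on the same biology.
# All numbers here are invented for illustration.
import numpy as np

profiles = {
    "compound_A": np.array([1.2, 0.8, 3.1, 0.4]),
    "compound_B": np.array([1.1, 0.9, 3.0, 0.5]),  # profile closely resembles compound_A
    "compound_C": np.array([0.2, 2.5, 0.9, 1.8]),  # clearly different profile
}

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two morphological profiles."""
    return float(np.corrcoef(a, b)[0, 1])

names = list(profiles)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        print(f"{first} vs {second}: {similarity(profiles[first], profiles[second]):+.2f}")
```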

In one example, Carpenter noted that two different compounds could produce similar effects in a cell and, by extension, could be used to treat the same condition. If so, it may be that one of the two – perhaps one never intended for this purpose – has fewer harmful side effects. Then there’s the potential benefit of identifying something we didn’t know was affected by disease. “It allows us to say, ‘hey, there’s this cluster of six genes, five of which are really well known to be part of this pathway, but the sixth one, we didn’t know what it did, but now we have a strong clue it’s involved in the same biological process.’” “Maybe those other five genes, for whatever reason, aren’t great direct targets themselves, maybe the chemicals don’t bind,” she said, “but the sixth one [could be] really great for that.”


In this context, the startups using AI in their drug discovery processes are hoping that they can find the diamonds hiding in plain sight. Dr. Richmond said that Benevolent’s approach is for the team to pick a disease of interest and then formulate a biological question around it. So, at the start of one project, the team might wonder if there are ways to treat ALS by enhancing, or fixing, the way a cell’s own housekeeping system works. (To be clear, this is a purely hypothetical example supplied by Dr. Richmond.)

That question is then run through Benevolent’s AI models, which pull together data from a wide variety of sources. They produce a ranked list of potential answers, which can include novel compounds or existing drugs that could be adapted to suit. The data then goes to a researcher, who can examine what weight, if any, to give its findings. Dr. Richmond added that the model has to provide evidence from existing literature or sources to support its findings, even when its picks are out of left field, and that, at all times, a human has the final say on which of its results should be pursued and how vigorously.

It’s a similar situation at Recursion, with Dr. Gibson claiming that its model is now capable of predicting “how any drug will interact with any disease without having to physically test it.” The model has so far generated around three trillion predictions connecting potential problems to potential solutions, based on the data it has absorbed and simulated. Gibson said the process at the company now resembles a web search: researchers sit down at a terminal, “type in a gene associated with breast cancer and [the system] populates all the other genes and compounds that [it believes are] related.”
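As a toy illustration of that “search engine” workflow, imagine a table of precomputed association scores queried by gene, returning the most strongly related compounds. The gene and compound names and the scores below are placeholders, not Recursion’s data or model.

```python
# Toy illustration of the "search engine" workflow described above: a model has already scored
# gene-compound relationships, and a researcher queries the table by gene for a ranked list.
# The gene and compound names and the scores are placeholders, not Recursion's data or model.
predicted_associations = {
    ("GENE_X", "compound_1"): 0.92,
    ("GENE_X", "compound_2"): 0.77,
    ("GENE_X", "compound_3"): 0.31,
    ("GENE_Y", "compound_1"): 0.12,
}

def query(gene: str, top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top-scoring compounds the model links to the given gene."""
    hits = [(compound, score) for (g, compound), score in predicted_associations.items() if g == gene]
    return sorted(hits, key=lambda item: item[1], reverse=True)[:top_k]

print(query("GENE_X"))  # [('compound_1', 0.92), ('compound_2', 0.77), ('compound_3', 0.31)]
```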

“What gets exciting,” said Dr. Gibson, “is when [we] see a gene nobody has ever heard of in the list, which feels like novel biology because the world has no idea it exists.” Once a target has been identified and the findings checked by a human, the data will be passed to Recursion’s in-house scientific laboratory. Here, researchers will run initial experiments to see if what was found in the simulation can be replicated in the real world. Dr. Gibson said that Recursion’s wet lab, which uses large-scale automation, is capable of running more than two million experiments in a working week.

“About six weeks later, with very little human intervention, we’ll get the results,” said Dr. Gibson, and if they’re positive, that’s when the team will “really start investing.” Until that point, the short period of validation work has cost the company “very little money and time.” The promise is that, rather than a three-year preclinical phase, the whole process can be crunched down to a few database searches, some oversight and a few weeks of ex vivo testing to confirm whether the system’s hunches are worth a real effort to interrogate. Dr. Gibson said the company believes it has taken a “year’s worth of animal model work and [compressed] it, in many cases, to two months.”

Of course, there is not yet a concrete success story, no wonder cure that any company in this space can point to as validation of the approach. But Recursion can cite one real-world example of how close its platform came to matching the results of a critical study. In April 2020, Recursion ran the COVID-19 sequence through its system to look at potential treatments, examining both FDA-approved drugs and candidates in late-stage clinical trials. The system produced a list of nine potential candidates warranting further analysis, eight of which later proved to be correct. It also predicted that hydroxychloroquine and ivermectin, both much-ballyhooed in the earliest days of the pandemic, would flop.

And there are AI-informed drugs undergoing real-world clinical trials right now. Recursion points to five projects currently finishing stage one (tests in healthy volunteers) or entering stage two (trials in people with the rare diseases in question). Benevolent has started a stage one trial of BEN-8744, a treatment for ulcerative colitis that may help with other inflammatory bowel disorders. BEN-8744 targets an inhibitor that has no prior associations in the existing research, which, if the trial succeeds, will add weight to the idea that AIs can spot connections humans have missed. Of course, we can’t draw any conclusions until at least early next year, when the results of those initial tests are released.


There are plenty of unanswered questions, including how much we should rely on AI as the sole arbiter of the drug discovery pipeline, and others around the quality of the training data and biases in the wider sources. Dr. Richmond highlighted issues around bias in genetic data sources, both in terms of the homogeneity of cell cultures and in how those tests are carried out. Similarly, Dr. Carpenter said the results of her most recent project, the publicly available JUMP-Cell Painting project, were based on cells from a single participant. “We picked it with good reason, but it’s still one human and one cell type from that one human.” In an ideal world, she’d have a far broader range of participants and cell types, but the obstacles right now are funding and time, or more accurately, their absence.

But, for now, all we can do is await the results of these early trials and hope they bear fruit. Like every other potential application of AI, its value will rest largely on its ability to improve the quality of the work – or, more likely, to improve the bottom line for the business in question. If AI can make the savings attractive enough, however, then maybe the diseases that aren’t likely to recoup their investment under the current system will stand a chance. It could all collapse in a puff of hype, or it may offer real hope to families struggling for help while dealing with a rare disorder.


AI is coming for big pharma





Researchers at MIT’s CSAIL division, which focuses on computer engineering and AI development, built two machine learning algorithms that can detect pancreatic cancer at a higher rate than current diagnostic standards. Together, the two models make up the “PRISM” neural network, which is designed specifically to detect pancreatic ductal adenocarcinoma (PDAC), the most prevalent form of pancreatic cancer.

The current standard PDAC screening criteria catch about 10 percent of cases in patients examined by professionals. In comparison, MIT’s PRISM was able to identify PDAC cases 35 percent of the time.
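To put those rates in concrete terms, here is a simple worked comparison over a hypothetical group of 1,000 people who actually have PDAC, using the figures above.

```python
# What those rates mean concretely: in a hypothetical group of 1,000 people who actually have
# PDAC, roughly 10 percent detection versus roughly 35 percent translates into very different
# numbers of cases caught. Figures are taken from the comparison above.
true_cases = 1_000
print(f"Standard criteria: ~{round(true_cases * 0.10)} cases flagged")
print(f"PRISM:             ~{round(true_cases * 0.35)} cases flagged")
```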

While using AI in diagnostics is not an entirely new feat, MIT’s PRISM stands out because of how it was developed. The neural network was trained on diverse sets of real electronic health records from health institutions across the US. It was fed data from over 5 million patients’ electronic health records, which the team said “surpassed the scale” of information fed to AI models in this particular area of research. “The model uses routine clinical and lab data to make its predictions, and the diversity of the US population is a significant advancement over other PDAC models, which are usually confined to specific geographic regions like a few healthcare centers in the US,” said Kai Jia of MIT CSAIL, senior author of the paper.

MIT’s PRISM project started over six years ago. The motivation for developing an algorithm that can detect PDAC early is that most patients are diagnosed in the later stages of the cancer’s development; about 80 percent are diagnosed far too late.

The AI works by analyzing patient demographics, previous diagnoses, current and previous medications and lab results. Collectively, the model predicts the probability of cancer by analyzing electronic health record data in tandem with factors like a patient’s age and certain lifestyle risk factors. Still, PRISM can only help diagnose as many patients as the AI can reach. At the moment, the technology is confined to MIT labs and select patients in the US. Scaling it will involve feeding the algorithm more diverse data sets, and perhaps even global health profiles, to increase accessibility.
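The article doesn’t describe PRISM’s architecture in detail, but the general shape of an EHR-based risk model can be sketched as follows: routine clinical features go in, a probability of disease comes out. This is a plain logistic-regression stand-in trained on synthetic data, not MIT’s actual model, and the feature names are illustrative assumptions.

```python
# Generic sketch of an EHR-based risk model of the kind described above: routine features
# (age, prior diagnoses, lab values) go in, a probability of disease comes out. This uses a
# plain logistic-regression stand-in trained on synthetic data, NOT MIT's actual PRISM model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic features: age, prior diabetes diagnosis (0/1), recent weight loss (0/1), a lab value
X = np.column_stack([
    rng.normal(65, 10, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.normal(1.0, 0.3, n),
])
# Synthetic labels loosely tied to the features, purely so the model has something to learn
logits = 0.04 * (X[:, 0] - 65) + 0.8 * X[:, 1] + 1.0 * X[:, 2] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[72, 1, 1, 1.2]])  # hypothetical feature vector for one patient
print(f"Predicted risk: {model.predict_proba(new_patient)[0, 1]:.1%}")
```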

Nonetheless, this isn't MIT’s first stab at developing an AI model that can predict cancer risk. It notably developed a way to train models to predict the risk of breast cancer in women using mammogram records. In that line of research, MIT experts confirmed that the more diverse the data sets, the better the AI gets at diagnosing cancer across races and populations. The continued development of AI models that can predict cancer probability will not only improve outcomes for patients when malignancy is identified earlier, it will also lessen the workload of overworked medical professionals. The market for AI in diagnostics is so ripe for change that it is piquing the interest of big tech companies like IBM, which attempted to create an AI program that could detect breast cancer a year in advance.


MIT experts develop AI models that can detect pancreatic cancer early