Artificial intelligence in healthcare
YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, it recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government. The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem.
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated to be equal at exactly 61%, the kinds of errors differed for each race: the system consistently overestimated the chance that a black defendant would re-offend and underestimated the chance that a white defendant would re-offend. In 2017, several researchers showed that it was mathematically impossible for COMPAS to satisfy all possible measures of fairness when the base rates of re-offense differed between whites and blacks in the data.
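The impossibility result can be illustrated with a small numerical sketch. The figures below are invented for illustration, not COMPAS data: when base rates differ between two groups, a classifier whose predictions are equally reliable for both (equal positive predictive value) must produce unequal false positive and false negative rates.

```python
# Toy illustration (invented numbers, not COMPAS data): with different
# base rates, equal positive predictive value forces unequal error rates.

def rates(tp, fp, fn, tn):
    """Return (positive predictive value, false positive rate, false negative rate)."""
    return tp / (tp + fp), fp / (fp + tn), fn / (fn + tp)

# Group A: 100 people, 50 of whom re-offend (base rate 0.5).
ppv_a, fpr_a, fnr_a = rates(tp=40, fp=10, fn=10, tn=40)

# Group B: 100 people, 30 of whom re-offend (base rate 0.3).
ppv_b, fpr_b, fnr_b = rates(tp=20, fp=5, fn=10, tn=65)

print(ppv_a, ppv_b)  # 0.8 and 0.8: predictions equally reliable for both groups
print(fpr_a, fpr_b)  # 0.2 vs ~0.07: group A is falsely flagged far more often
print(fnr_a, fnr_b)  # 0.2 vs ~0.33: group B re-offenders are missed more often
```

No choice of thresholds can fix this: as long as the base rates differ, at least one fairness criterion (calibration, equal false positive rates, or equal false negative rates) must be violated.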
The application of AI in medicine and medical research has the potential to improve patient care and quality of life. Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI if its applications can diagnose and treat patients more accurately.
Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take destructive actions.
Friendly AI refers to machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a high research priority: it may require a large investment, and it must be completed before AI becomes an existential risk.
Artificial intelligence in healthcare
Algorithms based on flawed or limited data could reinforce racial, cultural, gender, or social-status prejudices. For instance, previous studies reported that pre-existing and new forms of bias and discrimination against underrepresented groups could be worsened in the absence of responsible AI tools. We found that the public is concerned about the potential of AI tools to discriminate against specific groups. Such fears were raised, for example, in a study that assessed consecutive patients for suspected Acute Coronary Syndrome; similar concerns were found in a study that examined the impact of AI on public health practices, and in another that explored patients' views about various AI applications in healthcare. The public therefore strongly advocates effective human oversight and governance to curb potential excesses of AI tools during patient care. We believe that the algorithms employed by AI tools should not absolve clinicians and their facilities of responsibility. We further contend that, until the necessary steps are taken, AI usage in healthcare could undermine SDG 11.7, which calls for universal access to safe, inclusive, and accessible public spaces by 2030. The evidence is that patients and the public are generally aware of bias and discriminatory services at many medical facilities; hence the fear that AI tools could be deliberately set up to provide biased and racialised care that compromises rather than improves health outcomes.
Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, et al. Attitudes and perception of artificial intelligence in healthcare: a cross-sectional survey among patients. Digit Health. 2022.
It also points out that these opportunities are linked to challenges and risks, including the unethical collection and use of health data, biases encoded in algorithms, and risks of AI to patient safety, cybersecurity, and the environment.
In healthcare, the most common application of traditional machine learning is precision medicine – predicting which treatment protocols are likely to succeed for a patient based on various patient attributes and the treatment context.2 The great majority of machine learning and precision medicine applications require a training dataset for which the outcome variable (e.g. onset of disease) is known; this is called supervised learning.
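A minimal sketch of what supervised learning looks like for a precision-medicine style task, using a toy nearest-neighbour classifier. The feature names, data, and outcomes below are illustrative assumptions, not drawn from any real study.

```python
# Minimal supervised-learning sketch: a k-nearest-neighbour classifier
# predicting treatment success from patient attributes. All data here is
# invented for illustration.
import math

# Training set: (age, biomarker level) -> known outcome (1 = treatment succeeded).
# Having the outcome variable labelled for every training case is what
# makes this "supervised" learning.
train = [
    ((45, 1.2), 1),
    ((50, 1.1), 1),
    ((70, 3.5), 0),
    ((65, 3.0), 0),
]

def predict(patient, k=3):
    """Label a new patient by majority vote of the k nearest training cases."""
    nearest = sorted(train, key=lambda ex: math.dist(patient, ex[0]))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

print(predict((48, 1.0)))  # close to the "succeeded" cases -> predicts 1
```

A real system would use far more attributes and cases, but the workflow is the same: learn a mapping from labelled examples, then apply it to a new patient whose outcome is unknown.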
Artificial intelligence stocks
Despite the recent earnings dip, AMD stock has performed strongly over the last year and the last decade. It has averaged returns of 48.3%. It has a “B” financial health rating and a buyback yield of 0.4%.
NVDA is the best-performing AI stock over the past year. Earnings per share (EPS) leapt higher in 2023, and analysts project strong EPS growth going forward. It has the highest forecast five-year EPS growth on the list.
Longer term, Morgan Stanley (MS) researchers believe AI-driven improvement in digital experiences will push more consumers to spend online vs. offline. Offline spending is currently worth an estimated $6 trillion. That's a sizable opportunity for the right businesses and their investors.