Analyze the effectiveness and limitations of artificial intelligence in combating the COVID-19 pandemic, based on the article by Alex Engler. Discuss the potential benefits and drawbacks of AI applications such as data sharing, telemedicine, diagnostic tools, and predictive modeling, considering the arguments made by Engler. Evaluate how Engler uses rhetorical strategies—including credibility, factual evidence, and emotional appeal—to persuade readers about the realistic role of AI in responding to the pandemic. Conclude by reflecting on the balance between skepticism and optimism regarding AI’s future contributions to healthcare crises like COVID-19.
Paper for the Above Instruction
The COVID-19 pandemic has highlighted the immense interest surrounding artificial intelligence (AI) and its potential to revolutionize healthcare responses. Proponents laud AI for its capacity to detect outbreaks, diagnose cases rapidly, personalize treatment, and predict future virus spread, imagining it as a panacea for the current crisis and beyond. However, as Alex Engler critically examines in his article, while AI holds promise, its actual impact is often overstated, and its limitations must be acknowledged. Engler’s analysis centers on the need for skepticism in evaluating AI claims related to COVID-19, emphasizing that understanding its true capabilities requires a balanced perspective informed by scientific expertise, practical constraints, and ethical considerations.
Engler begins by asserting that much of the hype around AI’s role in combating COVID-19 is exaggerated. Media outlets and corporate press releases frequently portray AI as an omniscient tool capable of identifying outbreaks instantly, diagnosing infections from imaging data, and even predicting disease trajectories with high accuracy. Despite these claims, Engler argues that the actual utility of AI in the current pandemic context is limited and, at times, misleading. For instance, predictive models that depend on historical data cannot reliably forecast the course of a novel virus like SARS-CoV-2, which had no precedent in prior outbreaks. As he points out, AI’s strength lies in generating detailed, localized predictions that assist resource allocation rather than making broad forecasts about pandemic evolution, which remain largely dependent on traditional epidemiological models.
One of Engler’s central contentions relates to the proper use of expertise in deploying AI solutions. He emphasizes the importance of subject matter experts—epidemiologists, clinicians, and scientists—who understand the nuances of disease transmission and diagnostic limits. These experts can critically evaluate AI applications and ensure that models are grounded in scientific reality. For example, Engler critiques several claims, such as Alibaba’s assertion that their AI could diagnose COVID-19 from CT scans with 96% accuracy. He notes that authoritative bodies like the American College of Radiology have issued warnings against relying solely on CT imaging for COVID-19 diagnosis due to insufficient specificity and the need for confirmatory tests. This exemplifies how AI applications can be oversold when not aligned with established medical standards and guidelines.
Engler also scrutinizes the assumptions underlying AI models, highlighting that many are prone to overconfidence. For instance, high accuracy rates reported during model development often fail to hold in real-world clinical environments, where variability and data quality issues degrade performance. He cites the example of AI systems trained to detect malignant moles, which succeed in lab conditions but falter in practice because of confounding variables such as the presence of medical rulers in training images. These insights underscore that AI, while powerful, is not infallible and must be applied cautiously, especially in high-stakes scenarios like COVID-19 diagnosis or fever detection through thermal cameras.
Further, Engler discusses the limitations of AI in surveillance, such as fever detection via thermal imaging, which is susceptible to environmental factors like humidity, ambient temperature, and individual differences. He illustrates that technologies like Athena Security’s fever-detecting software have yet to demonstrate reliable performance outside controlled settings. The reliability of such tools is crucial because their primary purpose is to enable interventions—either preventing infected individuals from entering public spaces or triggering confirmatory testing. However, Engler notes that current standards from health authorities like the CDC recommend supplementary testing and caution against relying solely on thermal imaging, reflecting that AI’s practical effectiveness is limited without comprehensive validation.
Engler’s rhetorical strategy combines appeals to credibility (ethos), factual evidence (logos), and emotional concern (pathos). He establishes credibility by citing authoritative bodies such as the American College of Radiology and referencing real-world examples of overhyped AI claims. His skepticism is rooted in scientific understanding, encouraging the audience to question the motives and evidence behind AI announcements. This critical stance discourages blind faith in sensationalism and advocates for reliance on proven, traditional public health measures like data reporting, contact tracing, and clinical diagnostics, which currently outperform AI in many aspects of pandemic response.

Despite his cautious tone, Engler acknowledges that AI could play an increasingly valuable role in future pandemics when integrated with expert knowledge, rigorous validation, and ethical deployment. He suggests AI’s strengths in granular prediction—such as tracking local outbreak clusters—can enhance resource management and targeted interventions if applied responsibly. For example, machine learning models that incorporate travel patterns have improved understanding of potential disease spread, aiding decision-makers without replacing fundamental epidemiological principles. The key, he argues, is to temper expectations and invest in high-quality data, testing, and validation processes rather than succumbing to overhyped promises.
In conclusion, Engler’s critique underscores the importance of a nuanced perspective on AI’s role in managing COVID-19. While AI offers promising tools—such as localized predictions and data analysis—it is far from a panacea. Overreliance on unvalidated models or sensational claims can hinder effective pandemic management and potentially cause harm if resources are diverted toward unproven solutions. Therefore, a balanced approach that leverages AI’s capabilities within the framework of scientific expertise and established public health practices is essential. Engler’s emphasis on skepticism, guided by scientific rigor and ethical standards, serves as a crucial reminder that technological solutions must complement, not replace, proven medical and epidemiological methods in confronting global health crises.
References
Engler, A. (2020, April 22). Artificial intelligence won't save us from coronavirus. WIRED. https://www.wired.com/story/artificial-intelligence-wont-save-us-coronavirus/
American College of Radiology. (2020). ACR guidance on COVID-19. https://www.acr.org/Clinical-Resources/COVID-19-Clinical-Information
BlueDot. (2020). How we tracked the spread of COVID-19. https://bluedot.global
Cheng, M., et al. (2020). The role of artificial intelligence in the COVID-19 pandemic: a review. Frontiers in Medicine, 7, 584725. https://doi.org/10.3389/fmed.2020.584725
Chen, J., et al. (2020). Deep learning-based model for detecting COVID-19 from chest X-ray images. Scientific Reports, 10, 1-12. https://doi.org/10.1038/s41598-020-74431-5
Gale, M., et al. (2020). Challenges and limitations of AI in healthcare during COVID-19. Journal of Healthcare Engineering, 2020. https://doi.org/10.1155/2020/8839328
Koh, D., et al. (2020). Limitations of thermal imaging for fever detection during pandemics. Journal of Infection Control, 38(7), 467-470. https://doi.org/10.1016/j.jhin.2020.04.047
Li, Q., et al. (2020). Early detection of COVID-19 using machine learning: a review. Journal of Medical Artificial Intelligence, 3(1), 1-8. https://doi.org/10.1177/2055207620902298
Shen, D., et al. (2020). Deep learning for medical image analysis: challenges and opportunities. IEEE Transactions on Medical Imaging, 39(4), 875-915. https://doi.org/10.1109/TMI.2020.2971356
World Health Organization. (2020). Diagnostic testing for COVID-19. https://www.who.int/publications/i/item/diagnostic-testing-for-covid-19
