AI is rapidly evolving into a collaborative tool that enhances human capabilities rather than replacing human labor, allowing us to focus on more complex or personal goals. Although AI is undeniably powerful, it ultimately depends on human guidance to provide context and direction toward meaningful goals. The Stanford AI Index Report is an annual publication that provides an overview of AI trends and projections, covering a broad range of topics including productivity, bias, global opinion, and regulation. According to the report, AI has improved professional productivity by 32.81%, efficiency by 24.96%, and learning speed by 25.17%. A cited Microsoft study, Early LLM-based Tools for Enterprise Information Workers Likely Provide Meaningful Boosts to Productivity, compared task completion times between two groups: one using an AI tool (Copilot) and another without any AI assistance. Participants using AI completed tasks 26% to 73% faster than the group without the tool. A similar cited study, Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality, found that people using AI tools also produced work roughly 40% higher in quality.

However, like any other tool, users need to know how to use AI correctly to achieve similar results. AI suffers from several major issues that users must be aware of. The first is inherent bias stemming from the data used to train these models. A prominent example cited by the AI Index Report, More Human Than Human: Measuring ChatGPT Political Bias, studied the political bias of AI tools. The study revealed a strong positive correlation between the default outputs of certain AI tools (notably ChatGPT) and radical Democratic views, and a strong negative correlation with radical Republican views. This bias, which extends to other areas of LLMs' training data, largely stems from unbalanced datasets during the training phase. It creates uncertainty because users cannot verify that an AI was trained without bias across all topics, not just politics. Such bias can narrow the range of perspectives available, affecting the accuracy and fairness of AI outputs.
The second issue is that large language models (LLMs) hallucinate, or provide misinformation, in about 19.5% of their outputs, according to the AI benchmark HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models. Although the exact causes of these hallucinations are not yet fully understood, multiple companies are actively working to mitigate the problem. Hallucinations are particularly problematic in high-stakes areas such as medicine, where critical decisions rely on accurate, reliable, and unbiased information. Dr. Adam Rodman noted this phenomenon in his efforts to study AI's impact on the medical field, and it was one of the reasons he was originally skeptical of the technology. Together, these observations suggest that AI is unlikely to replace human workers; rather, it will serve as a tool that requires careful human oversight before being applied to important tasks.
Although AI tools are designed with good intentions, like most tools they attract malicious actors looking to use them with ill intent. According to the Stanford AI Index Report, there were approximately 123 incidents of AI misuse in 2023. In January 2024, sexually explicit, AI-generated images of Taylor Swift, known as sexual deepfakes, were created and shared on X. Cases like this make it easy to see why a 2023 Ipsos survey reported that 52% of respondents worldwide feel nervous about AI. Such incidents show that, however powerful these tools are, some form of regulation or built-in safeguards is needed.

As AI tools continue to spread, government officials have been debating and implementing laws to safeguard their use. According to a multistate.ai article, Most States Have Enacted Sexual Deepfake Laws, 45 states are currently discussing legislation on AI deepfakes. There is also existing legislation, such as the Jobs of the Future Act (House Bill 4498), introduced in 2023 to assess AI's effects on workers' skills and its potential to enhance work performance while possibly expanding the workforce. This legislative effort underscores that even the government views AI as a tool to supercharge human productivity, not merely a replacement for human labor.
A key example of how AI could be used as a tool appeared in the Autumn 2024 issue of Harvard Medicine Magazine. The article Can AI Make Medicine More Human? by Adam Rodman offers an excellent perspective on how AI could become an instrumental tool for medical professionals, augmenting their abilities much as the stethoscope did. Dr. Rodman highlights that doctors' work has been reduced to interacting with patient charts, scans, and other medical paperwork rather than the human aspects of the career, such as interacting with patients. AI could supercharge medical professionals' work by providing comprehensive diagnoses based on the doctor's observations during patient interactions and on trends found when analyzing patient paperwork. Dr. Rodman ends his article by saying he can envision a future where AI is integrated into the infrastructure of healthcare, providing feedback, recommendations, and diagnoses that improve the quality of care while increasing both doctor and patient confidence.
As AI development continues to push the boundaries of what we thought possible, it is easy to imagine a world where it benefits everyone. My favorite example of what the future could hold comes from pop culture: in the Marvel universe, Tony Stark created what he called Just A Rather Very Intelligent System, a.k.a. J.A.R.V.I.S., which assists him with a myriad of tasks. But how feasible is this idea? Thanks to advances in machine learning, there are now programs that provide similar features once considered pure science fiction.
Machine learning allows computers to learn from input data and act on it without explicit instructions. With this ability, programs are being developed that can automate daily routines to expedite your workflow, alongside the other benefits of AI mentioned earlier. OpenAI recently released Operator, a tool designed to autonomously perform web-based tasks for users by mimicking human actions online. This option, however, can be expensive, costing users $200/month. For users with sufficiently capable hardware, there are open-source alternatives such as Browser Use that operate in a similar way. In his video ChatGPT Operator is expensive….use this instead, NetworkChuck compared both programs and found that they provided a similar user experience. These programs could simplify everyday workflows by analyzing user data such as emails and calendars and performing meticulous tasks on the user's behalf. This would free up the user's schedule by letting AI handle things that don't require human cognition, such as responding to emails or updating calendars, while flagging important tasks that do require attention; a rough sketch of how such an agent can be scripted appears below.
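To make this concrete, here is a minimal sketch of scripting such an agent with the open-source Browser Use library mentioned above. It follows the Agent/task interface shown in the project's documentation; the specific task string is a hypothetical example, and the sketch assumes an OpenAI API key is available in the environment.

```python
# Minimal sketch of a web-automation agent using the Browser Use library
# (pip install browser-use, plus a Playwright browser via: playwright install).
# Assumes OPENAI_API_KEY is set; the task below is a hypothetical example.
import asyncio

from browser_use import Agent
from langchain_openai import ChatOpenAI

async def main():
    # The agent takes a plain-language task and drives a real browser,
    # mimicking the clicks and typing a human would perform.
    agent = Agent(
        task="Open my calendar, find tomorrow's meetings, and draft a summary email.",
        llm=ChatOpenAI(model="gpt-4o"),  # any supported chat model works here
    )
    result = await agent.run()  # runs until the task completes or fails
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

The appeal of this design is that the "program" is just a sentence: the model plans the browser steps itself, which is what makes tools like Operator and Browser Use feel closer to a J.A.R.V.I.S.-style assistant than to traditional scripted automation.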
After my deep dive into the future of artificial intelligence, I was encouraged to learn that AI systems are being developed to work collaboratively with humans to enhance and optimize workflows, not replace human labor. The future of this technology is promising if people continue to treat it as a tool, remembering its limitations and guiding its development toward meaningful goals that avoid unethical use cases. Given the possible benefits in the medical field and the simplification of monotonous daily tasks, I disagree with the majority's wariness toward AI. Based on these findings, AI will most likely continue to expand into almost every profession, increasing productivity, efficiency, and quality of work across all industries.
Works Cited
“AI and Productivity Report: First Edition.” Microsoft Research, 2023, https://www.microsoft.com/en-us/research/wp-content/uploads/2023/12/AI-and-Productivity-Report-First-Edition.pdf.
“AI Index Report 2024.” Stanford Institute for Human-Centered Artificial Intelligence, Stanford University, Apr. 2024, https://hai.stanford.edu/ai-index/2024-ai-index-report.
“House Bill 4498.” Congress.gov, 118th Congress, U.S. House of Representatives, https://www.congress.gov/bill/118th-congress/house-bill/4498.
Ipsos. Ipsos Global AI 2023 Report. Ipsos, July 2023, https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report-WEB_0.pdf.
Li, Junyi, et al. “HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models.” arXiv, May 2023, https://arxiv.org/pdf/2305.11747.
Motoki, Fabio, et al. “More Human Than Human: Measuring ChatGPT Political Bias.” Public Choice, vol. 198, 2024, pp. 389–411, https://doi.org/10.1007/s11127-023-01097-2.
MultiState Associates. “Most States Have Enacted Sexual Deepfake Laws.” MultiState Associates, 28 June 2024, https://www.multistate.ai/updates/vol-32.
“Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality.” Harvard Business School Faculty & Research, Harvard Business School, 2023, https://www.hbs.edu/faculty/Pages/item.aspx?num=64700.
NetworkChuck. “ChatGPT Operator is expensive….use this instead (FREE + Open Source).” YouTube, 21 Feb. 2025, https://www.youtube.com/watch?v=sxTNACldK3Y.
Rodman, Adam. “Can AI Make Medicine More Human?” Harvard Medicine Magazine, Autumn 2024, https://magazine.hms.harvard.edu/articles/can-ai-make-medicine-more-human.