
Experts offer advice to new college grads on entering the workforce in the age of AI

New college graduates this year face an especially daunting task — putting their degrees to work just as “generative” artificial intelligence technology like ChatGPT is beginning to change the American workplace. 

“We are entering an entirely new economy, so the knowledge economy that we have been in for the last 50 years or so is on the way out, and a new economy is on the way in,” Aneesh Raman, Chief Economic Opportunity Officer at LinkedIn, told CBS MoneyWatch.

The impact of AI on Americans recently out of college is already visible across a range of industries and jobs, from technology and finance to media, legal fields and market research. As a result, unemployment among recent graduates has surpassed the nation’s overall jobless rate for the first time — a shift some experts attribute in part to the creeping influence of AI.

“There are signs that entry-level positions are being displaced by artificial intelligence at higher rates than the roles above them,” said Matthew Martin, senior U.S. economist at Oxford Economics.

With the adoption of AI at work only expected to accelerate, we asked three experts across academia, recruitment and consulting for advice on how new college grads should navigate this new normal. Here’s what they said.

Become fluent in AI 

Perhaps most important, experts say, young job-seekers should start using gen-AI tools today.

“Almost anybody in that audience, irrespective of the job that they’re pursuing, will be expected to use AI with some facility right away,” said Joseph Fuller, a professor at Harvard Business School and founder of the Managing the Future of Work project, comparing the task to learning how to use Microsoft Office for a previous generation of grads.

To get the ball rolling, experts encourage those who are starting to hunt for work to familiarize themselves with the array of tools at their disposal, such as Anthropic’s Claude or OpenAI’s ChatGPT. That means learning how to engage with such tools beyond simply using them as a search engine. 

“You want to get in a dialogue with it,” Fuller said. “You want to ask it to take different perspectives.”

Emily Rose McRae, an analyst at research and advisory firm Gartner, said learning how to use AI apps can also be a good way to develop transferable skills. For example, a job-seeker might ask AI to summarize documents and then validate its findings to ensure accuracy.

Meanwhile, although AI can be helpful when it comes time to filling out job applications, users should proceed with caution given that recruiters can often spot AI-generated language, experts note. Nearly two-thirds of job candidates today use AI at some point in the application process, according to a report from recruitment firm Career Group Companies.

“If you’re using it to write your cover letter and your resume and you did not review it, everyone can tell,” McRae said.

Another way to gain potentially valuable experience with AI while job hunting is to use it for interview practice. For example, users can ask a chatbot to provide sample questions they might face in an interview and then to rate the quality of their responses.

“If you are using it as a tool to get your own understanding of self in interviews, you’re going to start being leaps ahead of everyone else,” Raman said.

Hone your soft skills

Experts say that as AI surpasses humans in executing certain tasks — think actuarial math or corporate compliance, for example — more attention will shift to job candidates’ so-called soft skills, such as problem solving and communication.

“You cannot outsource your thinking to AI,” LinkedIn’s Raman said. “You have to continue to hone critical thinking and complex strategy thinking.”

The focus will be less on your pedigree — where you went to school or even whether you have a college degree — he added, and more on what he calls the “5 Cs”: curiosity, compassion, creativity, courage and communication. 

To improve their soft skills, Fuller encourages entry-level job candidates to work on turning what they regard as their biggest weakness into a strength. For instance, if you typically shy away from public speaking or talking in groups, push yourself to get comfortable in those situations. 

“The inability to do that is going to be penalized more severely in the work of the future than it has been in the past,” he said.

The Harvard professor also suggested highlighting examples of advanced social skills directly on your resume to help paint a picture for recruiters of how you can contribute to the workplace.

Choose your employer wisely

Beyond skills development, experts say college grads should be thoughtful about the type of company they choose to work at, knowing that AI could drastically alter the business in the coming years.

“The most important thing, if you’re a new grad, is where you work — not what you do at the place you’re going to work,” Raman told CBS MoneyWatch.

He encouraged college graduates to seek out employers that are integrating AI responsibly and with respect for their workforce — as opposed to embracing it chiefly to replace people. Companies that are adapting to what is a major technological shift in real time will typically offer the best opportunities for learning and growth, Fuller said. 

In evaluating a prospective employer, young job candidates should try to gain an understanding of how they fit into the company’s future. For example, McRae recommends asking hiring managers up front what types of investments the organization is making in its employees and what the room for growth looks like. 

“What are they telling me they care about? What do career paths look like for this role now? How do you help people develop the skills they need to become experts?” she said.

In researching companies, McRae also encouraged recent college grads to look for places that offer apprenticeship or rotational programs, which can offer ways to quickly ramp up their knowledge base, especially if traditional entry-level roles are in short supply.


Your Brain Wasn’t Built for This. Algorithms Know It.

As algorithms guide our every move, the line between convenience and control gets blurrier.

In March 2021, a driver in Charlton, Massachusetts, plunged his car into Buffumville Lake while following GPS directions. Rescue teams were called to recover the completely submerged vehicle from 8 feet of water. The driver thankfully escaped with just a few minor injuries. When asked why he drove into a lake despite being able to see the water ahead, his answer was simple: the GPS told him to go that way. We’ve all heard these stories, and let’s face it, they sound ridiculous. But here’s the thing: We are all somewhere on this spectrum of conveniently handing over decisions to our friendly bots.

The Silent Surrender of Decision-Making

As a society that prizes autonomy and independence, it’s surprising that we’ve gradually outsourced more of our decision-making to algorithms, often without even realizing it. What began with navigation has now expanded into nearly every aspect of our lives. We defer to recommendation engines for what to watch, read, eat and believe. We consult A.I. for career advice rather than developing our own criteria for meaningful work. We ask chatbots about relationship compatibility instead of honing our emotional intelligence. It’s hard to see where the machines end and our minds begin. Digital tech is now a literal extension of our minds, and we urgently need to treat it as such.

The convenience is undeniable. Why struggle with choices when an algorithm can analyze thousands of variables in milliseconds? Why develop your own expertise when you have access to a myriad of geniuses in your pocket? But this convenience comes with a subtle cost: our agency as human beings. We think of technology as supercharging what we want to do anyway, but there’s a thin line between facilitating what we want and manipulating it. The way this happens is often so subtle that we barely notice it happening, explains Karen Yeung, a scholar researching what she calls “hypernudging,” or how A.I. shapes our preferences.

Music streaming services don’t merely serve what you like; they gradually shift your musical taste toward more commercially viable artists by controlling your exposure. News aggregators don’t just deliver information; they subtly emphasize certain perspectives, slowly molding your political opinions. Media theorist Marshall McLuhan recognized this dynamic decades ago when he shared the observation that first we shape our tools and then our tools shape us. Today’s algorithms don’t just respond to our choices; they actively and intimately shape them.

Living in Narrowing Information Landscapes

Any skilled delegator will tell you that one of the most satisfying things about outsourcing decisions is that it frees up the mind. And it’s true. If Sunday mornings are always pancakes, you don’t have to think (or negotiate!) with anyone about what’s for breakfast. The problem is, when we delegate our primary information feeds—news, search and social media—it starts narrowing down our core understanding of reality. To some extent, this is essential for our sanity, as there is simply too much information to process. But what happens to our ideas, motivations and actions when what we perceive as the world—our reality—is increasingly limited?

Multiple algorithmic effects are at play simultaneously. Despite the illusion of infinite choice, our information landscape narrows through personal filtering and cultural homogenization, leaving us with increasingly limited perspectives. In Filterworld, Kyle Chayka explains how algorithms have flattened culture by rewarding certain engagement patterns. Content creators worldwide chase similar algorithmic rewards, producing remarkably similar outputs to maximize visibility. TikTok-optimized homes, Instagram-friendly cafés and Spotify-formatted songs are all designed to perform well within algorithmic systems.

“This one’s for Algorithm Daddy!” explains actress and activist Jameela Jamil, as she posts a selfie in a revealing dress—a gamified move she feels she has to make whenever she starts noticing the algorithms suppressing her more substantive social justice content. Cultural diversity suffers similarly because content not in English is less likely to be included in A.I. training data.

Hundreds of people interviewed described this paradoxical feeling: overwhelmed by choice yet a bit suffocated by algorithmic recommendations. “There are endless options on Netflix,” one executive said, “but I can’t find anything good to watch.” How can we make truly informed choices when our information diet is so tightly curated and narrowed?

Our Gradual Brain Atrophy

A famous study of London taxi drivers showed that their hippocampi—the brain regions responsible for spatial navigation—grew larger as they memorized the city’s labyrinthine streets. Thanks to neuroplasticity, our brains constantly change based on how we use them. And it works both ways: when we stop navigating using our senses, we lose the capacity to do so. For example, when we rely on A.I. for research, we don’t develop the core skills to connect ideas. When we accept A.I. summaries without checking sources, we delegate credibility evaluation and weaken our critical thinking. When we let algorithms curate our music, we atrophy our ability to develop personal taste. When we follow automated fitness recommendations rather than listening to our bodies, we diminish our intuitive understanding of our physical needs. When we let predictive text complete our thoughts, we start to forget how to express ourselves precisely.

In The Shallows, Nicholas Carr explores how our brains physically change in response to internet use, developing neural pathways that excel at rapid skimming but atrophy our capacity for sustained attention and deep reading. The philosopher-mechanic Matthew Crawford offers a compelling antidote in Shop Class as Soulcraft, arguing that working with physical objects—fixing motorcycles or building furniture—provides a form of mental engagement increasingly rare and precious in our digital economy. These are important and tangible trade-offs that fundamentally change us, and while they may seem inevitable in our digital worlds, recognizing how we’re shaped by every tool we use is the first step toward becoming more aware and intentional about technology.

Reclaiming Our Algorithmic Agency

The good news is that there are ways to regain control and maintain human agency in our digital lives. First, recognize that defaults are deliberate choices made by companies, not neutral starting points. Research consistently shows that people rarely change default settings. Did you know you can view Instagram posts chronologically rather than by algorithm-determined “relevance”? How many people utilize ChatGPT’s customization features? These options often exist for power users but remain largely unused by most. It’s not just about digging into the settings; it’s a mindset. Each time we accept a default setting, we surrender a choice. With repetition, this creates a form of learned helplessness—we begin to believe we have no control over our technological experiences.

Second, consider periodic “algorithm resets.” Log out, clear your data or use private browsing modes. While it’s convenient to stay logged in, this convenience comes at the cost of increasingly narrow personalization. When shopping, consider the privacy implications of centralizing all purchases through a single platform that builds comprehensive profiles of your behavior. Amazon Fresh, anyone? Third, support regulatory frameworks that protect cognitive liberty. As A.I. edges closer to being able to read and manipulate our thoughts, Professor Nita Farahany is among those making the case for a new human rights framework around the commodification of brain data. Without one, Farahany believes that “our freedom of thought, access and control over our own brains, and our mental privacy will be threatened.”

The algorithmic revolution promises unprecedented benefits. But many of these threaten to come at the cost of our agency and cognitive independence. By making conscious and intentional choices about when to follow algorithmic guidance and when to do our own thing, we can stay connected to what we value most in each situation. Perhaps the most important skill we can develop these days is knowing when to trust the machine and when to trust our own eyes, instincts, and judgment. So the next time a bot tries to steer you into a metaphorical lake, please remember you’re still the driver. And you’ve got options.

Menka Sanghvi is a mindfulness and digital habits expert based in London. She is a globally acclaimed author and speaker on attention, tech and society. Her latest book is Your Best Digital Life – Use Your Mind to Tame Your Tech (Macmillan, 2025). Her Substack newsletter, Trying Not To Be A Bot, explores our evolving relationship with A.I.




Medical school fully embraces AI tools for students



Artificial intelligence is quickly becoming a part of our daily lives — whether in the office or the classroom. Tom Hanson reports on one medical school that has become the first in the nation to incorporate AI fully into its doctor training program.




Inside the first U.S. medical school to fully incorporate AI into its doctor training program

Artificial intelligence is quickly becoming a part of our daily lives, whether in the office or the classroom, and one medical school is fully embracing the technology.

The Icahn School of Medicine at Mount Sinai in New York City has become the first in the nation to incorporate AI into its doctor training program, granting access to OpenAI’s ChatGPT Edu to all of its M.D. and graduate students. Faris Gulamali is among the school’s future doctors taking full advantage of the AI tool.

Gulamali said he uses ChatGPT to help him prep for surgeries and to improve his bedside manner when explaining complex diagnoses to patients.

When asked whether the tool, which is designed to help medical students meet the rigorous demands of their education, saved him time, Gulamali said: “It really helped at least reframe the explanation.”

The use of AI in sensitive fields such as medicine has brought up concerns of privacy violations, and OpenAI said it is collaborating with universities and medical schools like Mount Sinai to ensure robust safeguards are in place to protect students and patients. 

ChatGPT Edu is built to be fully compliant with HIPAA, the federal law restricting the release of medical information, according to OpenAI Vice President and General Manager of Education Leah Belsky.

“I think in medicine, and in health in particular, it’s essential that students learn how to use AI and how to use it safely,” she told CBS News. “It helps them to learn faster. It helps them to discover new areas of knowledge. It helps them to explore more deeply. What we’re really focused on is making sure that there is equitable access to AI.”

Belsky equated the impact of AI in the 21st century workplace to that of email and internet access in the 1990s.

For another Ph.D. student at Mount Sinai, the AI tool serves as technical support in complex research projects.

“It gives me a pseudo-clinician-style mentor who I can ask questions to at any time of day, as well as a pseudo-software engineering collaborator with whom I can debug problems that I’m having,” Vivek Kanpa told CBS News.

It’s not only the students who say AI is changing the medical field. Dr. Benjamin Glicksberg, an associate professor at Icahn School of Medicine, called it the most remarkable innovation he has ever encountered.

“It’s changed everything,” Dr. Glicksberg said. “I think it’s changed how I interact with students. It’s changed how I mentor and even try to innovate myself.”

The professor also said AI tools can be a real time saver, allowing him to be more available to students like Kanpa, who says people should grow with the technology rather than fear it.

“Growing with it as opposed to fearing this thing and holding it in this scary sense of it’s going to replace us, I think is really instrumental,” Kanpa said. 
