Tag Archives: AI

Experts offer advice to new college grads on entering the workforce in the age of AI

New college graduates this year face an especially daunting task — putting their degrees to work just as “generative” artificial intelligence technology like ChatGPT is beginning to change the American workplace. 

“We are entering an entirely new economy, so the knowledge economy that we have been in for the last 50 years or so is on the way out, and a new economy is on the way in,” Aneesh Raman, Chief Economic Opportunity Officer at LinkedIn, told CBS MoneyWatch.

The impact of AI on Americans recently out of college is already visible across a range of industries and jobs, from technology and finance to media, legal fields and market research. As a result, unemployment among recent graduates has for the first time surpassed the nation’s overall jobless rate — a shift some experts attribute in part to the creeping influence of AI.

“There are signs that entry-level positions are being displaced by artificial intelligence at higher rates than the roles above them,” said Matthew Martin, senior U.S. economist at Oxford Economics.

With the adoption of AI at work only expected to accelerate, we asked three experts across academia, recruitment and consulting for advice on how new college grads should navigate this new normal. Here’s what they said.

Become fluent in AI 

Perhaps most important, young job-seekers should start using gen-AI tools — today.

“Almost anybody in that audience, irrespective of the job that they’re pursuing, will be expected to use AI with some facility right away,” said Joseph Fuller, a professor at Harvard Business School and founder of the Managing the Future of Work project, comparing the task to learning how to use Microsoft Office for a previous generation of grads.

To get the ball rolling, experts encourage those who are starting to hunt for work to familiarize themselves with the array of tools at their disposal, such as Anthropic’s Claude or OpenAI’s ChatGPT. That means learning how to engage with such tools beyond simply using them as a search engine. 

“You want to get in a dialogue with it,” Fuller said. “You want to ask it to take different perspectives.”

Emily Rose McRae, an analyst at research and advisory firm Gartner, said learning how to use AI apps can also be a good way to develop transferable skills — for example, asking AI to summarize documents and then validating its findings to ensure accuracy.

Meanwhile, although AI can be helpful when it comes to filling out job applications, users should proceed with caution given that recruiters can often spot AI-generated language, experts note. Nearly two-thirds of job candidates today use AI at some point in the application process, according to a report from recruitment firm Career Group Companies.

“If you’re using it to write your cover letter and your resume and you did not review it, everyone can tell,” McRae said.

Another way to gain potentially valuable experience with AI while job hunting is to use it for interview practice. For example, users can ask a chatbot to provide sample questions they might face in an interview and then to rate the quality of their responses.

“If you are using it as a tool to get your own understanding of self in interviews, you’re going to start being leaps ahead of everyone else,” Raman said.

Hone your soft skills

Experts say that as AI surpasses humans in executing certain tasks — think actuarial math or corporate compliance, for example — more attention will shift to job candidates’ so-called soft skills, such as problem solving and communication.

“You cannot outsource your thinking to AI,” LinkedIn’s Raman said. “You have to continue to hone critical thinking and complex strategy thinking.”

The focus will be less on your pedigree — where you went to school or even whether you have a college degree — he added, and more on what he calls the “5 Cs”: curiosity, compassion, creativity, courage and communication. 

To improve their soft skills, Fuller encourages entry-level job candidates to work on turning what they regard as their biggest weakness into a strength. For instance, if you typically shy away from public speaking or talking in groups, push yourself to get comfortable in those situations. 

“The inability to do that is going to be penalized more severely in the work of the future than it has been in the past,” he said.

The Harvard professor also suggested highlighting examples of advanced social skills directly on your resume to help paint a picture for recruiters of how you can contribute to the workplace.

Choose your employer wisely

Beyond skills development, experts say college grads should be thoughtful about the type of company they choose to work at, knowing that AI could drastically alter the business in the coming years.

“The most important thing, if you’re a new grad, is where you work — not what you do at the place you’re going to work,” Raman told CBS MoneyWatch.

He encouraged college graduates to seek out employers that are integrating AI responsibly and with respect for their workforce — as opposed to embracing it chiefly to replace people. Companies that are adapting to what is a major technological shift in real time will typically offer the best opportunities for learning and growth, Fuller said. 

In evaluating a prospective employer, young job candidates should try to gain an understanding of how they fit into the company’s future. For example, McRae recommends asking hiring managers up front what types of investments the organization is making in its employees and what the room for growth looks like. 

“What are they telling me they care about? What do career paths look like for this role now? How do you help people develop the skills they need to become experts?” she said.

In researching companies, McRae also encouraged recent college grads to look for places with apprenticeship or rotational programs, which can help them quickly ramp up their knowledge base, especially if traditional entry-level roles are in short supply.


Meta’s platforms showed hundreds of “nudify” deepfake ads, CBS News investigation finds

Meta has removed a number of ads promoting “nudify” apps — AI tools used to create sexually explicit deepfakes using images of real people — after a CBS News investigation found hundreds of such advertisements on its platforms.

“We have strict rules against non-consensual intimate imagery; we removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps,” a Meta spokesperson told CBS News in an emailed statement. 

CBS News uncovered dozens of those ads on Meta’s Instagram platform, in its “Stories” feature, promoting AI tools that, in many cases, advertised the ability to “upload a photo” and “see anyone naked.” Other ads in Instagram’s Stories promoted the ability to upload and manipulate videos of real people. One promotional ad even read “how is this filter even allowed?” as text underneath an example of a nude deepfake.

One ad promoted its AI product using highly sexualized, underwear-clad deepfake images of actors Scarlett Johansson and Anne Hathaway. Some of the ads’ URLs redirected to websites that promote the ability to animate real people’s images and get them to perform sex acts. Some of the applications charged users between $20 and $80 to access these “exclusive” and “advance” features. In other cases, an ad’s URL redirected users to Apple’s app store, where “nudify” apps were available to download.

Meta platforms such as Instagram have marketed AI tools that let users create sexually explicit images of real people.

An analysis of the advertisements in Meta’s ad library found that there were, at a minimum, hundreds of these ads available across the company’s social media platforms, including on Facebook, Instagram, Threads, the Facebook Messenger application and Meta Audience Network — a platform that allows Meta advertisers to reach users on mobile apps and websites that partner with the company. 

According to Meta’s own Ad Library data, many of these ads were specifically targeted at men between the ages of 18 and 65, and were active in the United States, European Union and United Kingdom. 

A Meta spokesperson told CBS News that the spread of this sort of AI-generated content is an ongoing problem and that the company faces increasingly sophisticated challenges in trying to combat it.

“The people behind these exploitative apps constantly evolve their tactics to evade detection, so we’re continuously working to strengthen our enforcement,” a Meta spokesperson said. 

CBS News found that ads for “nudify” deepfake tools were still available on the company’s Instagram platform even after Meta had removed those initially flagged.


Deepfakes are manipulated images, audio recordings, or videos of real people that have been altered with artificial intelligence to misrepresent someone as saying or doing something that the person did not actually say or do. 

Last month, President Trump signed into law the bipartisan “Take It Down Act,” which, among other things, requires websites and social media companies to remove deepfake content within 48 hours of notice from a victim. 

Although the law makes it illegal to “knowingly publish” or threaten to publish intimate images without a person’s consent, including AI-created deepfakes, it does not target the tools used to create such AI-generated content. 

Those tools do violate platform safety and moderation rules implemented by both Apple and Meta on their respective platforms.

Meta’s advertising standards policy says, “ads must not contain adult nudity and sexual activity. This includes nudity, depictions of people in explicit or sexually suggestive positions, or activities that are sexually suggestive.”

Under Meta’s “bullying and harassment” policy, the company also prohibits “derogatory sexualized photoshop or drawings” on its platforms. The company says its regulations are intended to block users from sharing or threatening to share nonconsensual intimate imagery.

Apple’s guidelines for its app store explicitly state that “content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy” is banned.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell University’s tech research center, has been studying the surge in AI deepfake networks marketing on social platforms for more than a year. He told CBS News in a phone interview on Tuesday that he’d seen thousands more of these ads across Meta platforms, as well as on platforms such as X and Telegram, during that period. 

Although Telegram and X have what he described as a structural “lawlessness” that allows for this sort of content, he believes Meta’s leadership lacks the will to address the issue, despite having content moderators in place. 

“I do think that trust and safety teams at these companies care. I don’t think, frankly, that they care at the very top of the company in Meta’s case,” he said. “They’re clearly under-resourcing the teams that have to fight this stuff, because as sophisticated as these [deepfake] networks are … they don’t have Meta money to throw at it.” 

Mantzarlis also said that he found in his research that “nudify” deepfake generators are available to download on both Apple’s app store and Google’s Play store, expressing frustration with these massive platforms’ failure to police such content.

“The problem with apps is that they have this dual-use front where they present on the app store as a fun way to face swap, but then they are marketing on Meta as their primary purpose being nudification. So when these apps come up for review on the Apple or Google store, they don’t necessarily have the wherewithal to ban them,” he said.

“There needs to be cross-industry cooperation where if the app or the website markets itself as a tool for nudification on any place on the web, then everyone else can be like, ‘All right, I don’t care what you present yourself as on my platform, you’re gone,'” Mantzarlis added. 

CBS News has reached out to both Apple and Google for comment as to how they moderate their respective platforms. Neither company had responded by the time of writing. 

Major tech companies’ promotion of such apps raises serious questions about both user consent and about online safety for minors. A CBS News analysis of one “nudify” website promoted on Instagram showed that the site did not prompt any form of age verification prior to a user uploading a photo to generate a deepfake image. 

Such issues are widespread. In December, CBS News’ 60 Minutes reported on the lack of age verification on one of the most popular sites using artificial intelligence to generate fake nude photos of real people. 

Despite visitors being told that they must be 18 or older to use the site, and that “processing of minors is impossible,” 60 Minutes was able to immediately gain access to uploading photos once the user clicked “accept” on the age warning prompt, with no other age verification necessary.

Data also shows that a high percentage of underage teenagers have interacted with deepfake content. A March 2025 study conducted by the children’s protection nonprofit Thorn showed that among teens, 41% said they had heard of the term “deepfake nudes,” while 10% reported personally knowing someone who had had deepfake nude imagery created of them.


Colleges try to tackle A.I. in the classroom






Some colleges are turning to classic tactics to try to keep A.I. out of the classroom. Sales of lined composition booklets — known as “blue books” — which students once used to handwrite essays and exam answers, are on the rise, the Wall Street Journal reported. Here’s how schools are trying to tackle the exploding use of A.I.


What to know about AI assistants and how they’re changing personal productivity






Micro Center News editor Dan Ackerman joins “CBS Mornings Plus” to share what happened when he put Google’s Gemini 2.5 Pro to the test by creating his own AI “assistbot” to manage real tasks like scheduling, vacation planning and staying organized.


What is AI Mode, Google’s new artificial intelligence search technology?

Google on Tuesday rolled out AI Mode, its latest artificial intelligence feature designed to provide users with more detailed and tailored responses to questions entered into the search engine.

Unveiled Tuesday at the search giant’s annual Google I/O developer conference, AI Mode comes a year after the company introduced AI Overviews, its first tool to use generative AI technology to enhance its search engine capabilities. 

Sundar Pichai, CEO of both Google and parent company Alphabet, said in remarks at the conference that AI Mode represents a “total reimagining of Search” and that it would gradually be rolled out to all Google users.

“What all this progress means is that we are in a new phase of the AI platform shift, where decades of research are now becoming reality for people all over the world,” Pichai told the crowd at an amphitheater near the company’s headquarters in Mountain View, California, according to the Associated Press.

A video teasing the technology showcases how it works. In response to lengthy, detailed questions typed into Google search, AI Mode displays how many searches Google is performing and how many sites it’s scanning as the technology quickly generates a summarized response at the top of the search platform. It also provides a sidebar with links to relevant sites.

Google said it’s also testing a “Search Live” feature that will enable the search engine to respond to questions based on video content, as well as voice searches, or to questions verbalized by the user, rather than typed. 

As an example, Google’s teaser shows a person recording a video of themselves holding a bridge made of Popsicle sticks while asking the search engine what can be done to strengthen the structure. “To make it stronger, consider adding more triangles to the design,” an automated voice responds.

Google said it will begin feeding its latest AI model, Gemini 2.5, into its search engine starting next week. The company calls Gemini 2.5 its “most intelligent model” yet. 

Rapid advancements

The California-based company began testing AI Mode in March of this year. Google’s latest AI search tool builds on AI Overviews, which was introduced in the U.S. in May 2024 and has 1.5 billion users, according to an article on the Google website.

Some publishers argue that AI Overviews, which provides an AI-generated summary of information online, at times eliminating the need to click directly on source links for further information, has undercut traffic to their sites. According to a study by Ahrefs, AI Overviews led to a 35% lower average click-through rate for top-ranking pages on search engine results pages.

Concerns over accuracy

“By making AI Mode a core part of the experience, Google is betting it can cater to the demand for AI without alienating its massive base,” Gadjo Sevilla, a senior analyst for research firm eMarketer, wrote in a blog post. “But there are risks for hallucinations and factual errors which could drive users towards competitors,” he added.

Such factual errors were spotted with AI Overviews soon after its release, prompting Google to admit in a statement at the time that the technology produced “some odd and erroneous overviews.” In one instance, AI Overviews suggested that users add glue to pizza or eat at least one small rock a day, according to the MIT Technology Review.

As for AI Mode, Google has indicated that its new AI search tech is performing well and serving its intended purpose. 

“We conduct quantitative research and collect in-product feedback to ask users whether they’re satisfied with their results. And we’ve seen that introducing AI Overviews on Search leads to an increase in satisfaction and reported helpfulness,” a Google spokesperson told CBS MoneyWatch.


Extended interview: Bill Gates on AI, Trump’s aid cuts, the closing of his foundation and more






In a wide-ranging, exclusive interview with “CBS Mornings” co-host Tony Dokoupil, Bill Gates opens up about the end of his career, the future of artificial intelligence, the eventual closing of his foundation, President Trump and more.


AI is making online shopping scams harder to spot

Online scams are nothing new, but artificial intelligence is making it easier than ever to dupe people.

What used to take days now takes a scammer only minutes to create.

A new report from Microsoft highlights the scale of the problem. The company says it took down almost 500 malicious web domains last year and stopped approximately 1.6 million bot signup attempts every hour.

“Last year we were tracking 300 unique nation-state and financial crime groups. This year, we’re tracking 1,500,” Vasu Jakkal, corporate vice president of Microsoft Security, told CBS News Confirmed.

The company attributes much of the rise in this type of crime to generative AI, which has streamlined the process of making a website.

“You can just buy a kit off the web,” Jakkal explained. “It’s an assembly line. Someone builds the malware. Someone builds the infrastructure. Someone hosts the website.”

Jakkal explained that AI isn’t just helping scammers set up fraudulent sites; it also helps make them more believable. She said scammers use generative AI to create product descriptions, images, reviews and even influencer videos as part of a social engineering strategy to dupe shoppers into believing they’re scrolling through a legitimate business, when in reality they’re being lured into a digital trap.

Another tactic outlined in Microsoft’s report is domain impersonation. Jakkal said scammers make a near-perfect copy of a legitimate website’s address, sometimes changing just a single letter, to trick consumers into giving up money and information.
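Microsoft hasn’t published the internals of its detection systems, but the tactic just described, a lookalike domain that changes “just a single letter,” can be illustrated with plain edit distance. The sketch below is a minimal, hypothetical example; the watchlist, threshold and function names are invented for illustration:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical watchlist of legitimate domains.
KNOWN_BRANDS = ["paypal.com", "amazon.com", "microsoft.com"]

def looks_like_impersonation(domain: str, max_edits: int = 1):
    """Return the brand a domain appears to imitate, or None."""
    for brand in KNOWN_BRANDS:
        if domain != brand and levenshtein(domain, brand) <= max_edits:
            return brand
    return None
```

For example, `looks_like_impersonation("paypa1.com")` flags the lookalike (one substituted character), while the genuine `paypal.com` passes. A production system would also need to handle homoglyphs and subdomain tricks, which simple edit distance misses.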

As well as raising awareness of these scams, the company is introducing new tools to help safeguard its customers. Microsoft’s web browser, Edge, now features typo and domain-impersonation protection, which prompts users to check a website’s URL if the program suspects a misspelling. The browser also uses machine learning to block potentially malicious sites before consumers reach the homepage.

“We’re trying to combat at every place where we see there’s a potential of someone being vulnerable to a fraud attempt,” Jakkal said. The idea is to put checks and balances in place so people are able to pause and reevaluate, she said.

Scott Shackelford, executive director at the Center for Applied Cybersecurity Research at Indiana University, commended Microsoft for being one of the most proactive companies in fraud prevention, but said more action needed to come from both the private and public sector.

“Having the backing of big tech as part of this kind of public, private partnership would be a really great way to show that they do take it seriously.”

No matter where you’re browsing, CBS News Confirmed compiled some tips to spot sham sites.

Tips to stay safe while shopping online

  1. Be wary of impulse buying: Scammers will try to use pressure tactics like “limited-time” deals and countdown timers to get you to shop fast. Take a moment to pause and make sure the site you’re on is the real deal.
  2. Check for typos in the URL: Some scam sites will try to mimic real companies. But since they don’t own the domain, it’s common to see a URL that’s just slightly off from what you’d expect.
  3. Don’t rely on social media links: If you’re going from an app to a shopping site, close out of the page that opens automatically and try to find it independently on a web browser. 
  4. Check the reviews: Fraudulent sites will use fake reviews to make the products seem real. Watch out for similar phrases or wording in the reviews, or an overwhelming number of five-star reviews.
  5. Use a credit card: This allows you to dispute the payment or claim fraud if it turns out the deal really was too good to be true.


How AI is using facial recognition to help bring lost pets home

A new artificial intelligence-based technology is helping thousands of pet owners reunite with their lost animals, addressing a persistent problem that affects millions of American families each year.

The national database called Love Lost, operated by the nonprofit Petco Love, has already helped reconnect 100,000 owners with their lost pets since its launch in 2021.

“In the sheltering system, it’s about 20 percent of lost pets will be reunited, which is simply not enough,” said Susanne Kogut, president of Petco Love.

Michael Bown experienced this firsthand when his pitbull-mix, Millie, escaped during a walk in lower Manhattan after slipping out of her collar.

“Because she’s a rescue dog, she’s very anxious,” Bown said. “The only thing I was thinking is, she’s trying to find me, and she doesn’t know where I am.”

While Bown rushed home to search, his mother uploaded Millie’s photo to the Love Lost database. Within 14 hours, they received a call that changed everything.

Millie had run 10 miles north to Harlem, where she was struck by a car before being transported an additional 15 miles to a veterinary hospital in Paramus, New Jersey. The hospital had also uploaded Millie’s picture to the same free, donation-funded platform.

How pet tracker technology works

The technology works by identifying unique features of each animal — from eye shape and whisker length to unusual markings and tail curvature. The AI system collects up to 512 data points per pet, using machine learning to search for matching animals.

A key advantage of the system is its ability to recognize pets even when their appearance changes dramatically after getting lost. The database also pulls lost pet reports from social media posts to increase the odds of a successful match.
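Petco Love hasn’t detailed Love Lost’s actual algorithm, but the general technique the paragraphs above describe, reducing each photo to a numeric feature vector and ranking candidate animals by similarity, can be sketched as follows. Everything here (the vector length, threshold and names) is illustrative, not the service’s implementation:

```python
import math

def cosine_similarity(u, v):
    """Similarity of two feature vectors, in the range [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def best_match(query_vec, shelter_db, threshold=0.9):
    """Return the ID of the shelter pet whose features best match, or None.

    shelter_db maps pet IDs to feature vectors (in a real system, up to
    hundreds of values derived from eye shape, markings and so on).
    """
    best_id, best_score = None, threshold
    for pet_id, vec in shelter_db.items():
        score = cosine_similarity(query_vec, vec)
        if score > best_score:
            best_id, best_score = pet_id, score
    return best_id
```

Because the similarity is computed over many features at once, a close match can survive a change in any single one, which is why such a system can still recognize a pet whose appearance has shifted after getting lost.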

“People used to put flyers on a telephone pole. Now we have that one virtual telephone pole in the system,” Kogut said. “Everyone put the flyers there and we’re going to send these pets home!”

More than 3,000 animal shelters nationwide now participate in the program, which is funded entirely by donations. The nonprofit advises pet owners to be cautious of potential scams when using the service. They recommend being wary of anyone requesting money to return a lost pet and suggest limiting communications to the site’s secure platform.

Two months after her ordeal and while recovering from a broken leg, Millie is back with Bown and adjusting well to life at home.

“She likes to say ‘hi’ to every single dog that we see on the walk, regardless of if they want to say ‘hi’ to her,” Bown said. “But she’s doing really well.”


AI pioneer Geoffrey Hinton says world is not prepared for what’s coming






Geoffrey Hinton, whose work shaped modern artificial intelligence, says companies are moving too fast without enough focus on safety. Brook Silva-Braga introduced us to Hinton in 2023 and recently caught up with him.
