Tag Archives: Google

The $196 Billion Revolution: How Agentic A.I. Is Redefining Corporate Power

As traditional systems struggle to keep up, agentic A.I. is redrawing the lines of corporate advantage. Unsplash+

A Dutch insurance company quietly automated 91 percent of its automobile claims processing. A global logistics company revolutionized supply chain management with A.I. that thinks three moves ahead. Nvidia’s security systems now detect and neutralize threats before human analysts even spot them. These aren’t experiments—they’re the new reality of business warfare, where the global agentic A.I. market is projected to reach $196.6 billion by 2034, riding a staggering 43.8 percent compound annual growth rate.

While competitors wrestle with basic automation, early adopters of agentic A.I. are running systems that plan, decide and act independently. Enterprise software is headed for a huge shift: by 2028, 33 percent of it will feature agentic A.I., up from less than 1 percent in 2024. The companies mastering this technology today will dominate tomorrow’s markets.

The intelligence gap that’s reshaping industries

Forget everything you know about A.I. assistants. Agentic A.I. moves companies beyond reactive tools to something closer to a business partner: systems that operate in real time, spotting errors, suggesting resolutions and running complex workflows without human help.

Two-thirds of executives using agentic A.I. report measurable productivity boosts, with nearly 60 percent achieving significant cost savings. But the real transformation runs deeper. According to Futurum Research, agent-based A.I. will drive up to $6 trillion in economic value by 2028, fundamentally rewiring how business gets done.

Real-world transformation in action

The evidence is already mounting across industries:

Financial Services: A.I. agents at JPMorgan Chase keep an eye on customer finances, find signs of fraudulent activity and instantly stop suspicious transactions. The result? Proactive protection that traditional rule-based systems could never match.

Enterprise IT: Jamf’s A.I. assistant “Caspernicus” operates directly in Slack, handling software requests for over 70 percent of employees. Staff no longer wait for engineering support—they get instant help through natural language requests, dramatically improving productivity across all departments.

Logistics and Supply Chain: A leading logistics player manages its operations with intelligent A.I., analyzing live transport and inventory data to optimize deliveries without human involvement.

Cybersecurity: Nvidia launched Agent Morpheus, an A.I. framework that uses real-time data processing to automatically detect threats and maintain security, moving from reactive to predictive protection.

The economics of autonomous intelligence

The economic implications cannot be overstated. In 2024, the agentic A.I. market in the U.S. reached $769.5 million, and it is predicted to grow at a rate of 43.6 percent per year until 2030. But raw market size tells only part of the story. According to MIT, using agentic A.I. to empower employees can make them 40 percent more efficient, and companies that use A.I. for customer experiences have had sales rise by up to 15 percent. The ROI calculations are compelling: 62 percent of polled executives expect returns above 100 percent from agentic A.I. adoption.
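The compounding behind those projections is easy to verify. Here is a back-of-the-envelope sketch in Python using only the U.S. figures reported above (the 2030 result is an implication of those figures, not a number from the article):

```python
def project(value_millions, annual_growth, years):
    """Compound a starting market size forward at a fixed annual growth rate."""
    return value_millions * (1 + annual_growth) ** years

# U.S. agentic A.I. market: $769.5M in 2024, growing ~43.6 percent per year to 2030.
us_2030 = project(769.5, 0.436, 2030 - 2024)
print(f"Implied U.S. market size in 2030: ${us_2030 / 1000:.2f}B")
```

At that rate, the stated 2024 base implies a U.S. market of roughly $6.7 billion by 2030, which shows how quickly a 40-plus percent growth rate compounds.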

Enterprise leaders are responding with unprecedented investment. According to a SnapLogic survey, 79 percent of IT decision-makers plan to invest over $1 million in A.I. agents over the next year. The message is clear: staying ahead in the market now depends on investing in this technology.

The multi-agent enterprise: beyond single-point solutions

The next evolution is already emerging: networks of A.I. agents collaborating like digital teams. Consider the following scenario that reflects current deployments in leading companies.

A logistics agent detects a supply chain disruption. It instantly alerts procurement agents to source alternative suppliers while a finance agent rebalances cash flows to reflect the changes. Customer service agents proactively notify clients with updated timelines. No central system orchestrates this—the agents self-organize around business objectives.
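The scenario above amounts to an event-driven architecture with no central orchestrator. Below is a minimal sketch of that pattern, assuming a simple publish/subscribe bus; the agents, event names and messages are illustrative, not drawn from any vendor's platform:

```python
from collections import defaultdict

class EventBus:
    """Minimal pub/sub bus: agents self-organize by reacting to events,
    with no central system directing the workflow."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event, handler):
        self.subscribers[event].append(handler)

    def publish(self, event, payload):
        for handler in self.subscribers[event]:
            handler(payload)

bus = EventBus()
log = []

# The logistics agent only emits an event; it does not know who listens.
def logistics_agent(disruption):
    log.append(f"logistics: detected {disruption}")
    bus.publish("supply_disruption", disruption)

# Each downstream agent subscribes to the events it cares about.
bus.subscribe("supply_disruption",
              lambda d: log.append(f"procurement: sourcing alternative suppliers for {d}"))
bus.subscribe("supply_disruption",
              lambda d: log.append(f"finance: rebalancing cash flows for {d}"))
bus.subscribe("supply_disruption",
              lambda d: log.append(f"customer service: notifying clients about {d}"))

logistics_agent("port closure")
print("\n".join(log))
```

The design choice worth noting is that the logistics agent never addresses the other agents directly; each subscribes independently, which is what lets the group self-organize around a business objective.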

Deloitte predicts that in 2025, 25 percent of companies using generative A.I. will launch agentic A.I. pilots, growing to 50 percent by 2027. The technology has moved from concept to deployment faster than any enterprise technology in recent memory.

Platform wars: the new competitive landscape

The competitive dynamics are already crystallizing. Microsoft’s Copilot Studio, now adopted by more than 160,000 organizations, was used to build over 400,000 A.I. agents in the most recent quarter. Salesforce, IBM, Google and Oracle are racing to capture market share with their own platforms.

But the real battlefield isn’t in Silicon Valley—it’s in boardrooms where executives must choose between being disruptors or being disrupted. Eighty-nine percent of surveyed CIOs consider agent-based A.I. a strategic priority, yet 60 percent of DIY initiatives fail to scale past pilot stages due to unclear ROI.

The implementation reality: success factors and pitfalls

Despite the promise, deployment isn’t automatic. Nearly three-quarters of senior leaders believe agentic A.I. could give their company a significant competitive advantage. Still, half say it will make their operating model unrecognizable in just two years.

The most effective implementations follow a phased approach:

Phase 1: Infrastructure Readiness. Exposing enterprise tools and data via APIs, ensuring system interoperability and building monitoring and control frameworks.

Phase 2: Targeted Deployment. Starting with high-impact, data-rich processes prone to coordination bottlenecks such as incident resolution, customer onboarding and claims processing.

Phase 3: Multi-Agent Orchestration. Allowing agents to collaborate across functions, creating peer-to-peer protocols for coordination.

Phase 4: Organizational Redesign. Transitioning to hybrid structures where humans and agents share workflows.

The governance challenge

The autonomy that makes agentic A.I. powerful also creates new risks. Seventy-eight percent of CIOs cite security, compliance and data control as primary barriers to scaling agent-based A.I. Accountability, bias and ethical issues emerge whenever A.I. systems act on their own. Leading organizations have been building robust guardrails since day one. IBM watsonx Agents lead on governance with enterprise-ready features including role-based controls, compliance auditing and A.I. explainability safeguards.

The disruption timeline: why speed matters

The transformation is accelerating beyond most predictions. By 2029, Gartner predicts 80 percent of common customer service issues will be resolved autonomously, and 15 percent of all day-to-day work decisions will be made by A.I.

Some companies have already benefited from early action. For example, a leading Dutch insurer automated 91 percent of individual automobile claims by integrating custom A.I. agents, enabling adjusters to focus on complex cases requiring human knowledge. Competitors still processing claims manually face an insurmountable cost and speed disadvantage.

Industry-specific disruption patterns

Companies across sectors have different use cases and transformation timelines:

Financial Services: Leading the charge with fraud detection, credit assessment and regulatory compliance automation.

Healthcare: A.I. agents managing appointment scheduling, patient monitoring and treatment personalization are showing early success.

Manufacturing: Predictive maintenance and supply chain optimization are delivering immediate ROI.

Customer Service: In 2024, the customer service and virtual assistants sector led in revenue generation, driven by A.I. agents’ ability to address both straightforward and complicated issues.

The strategic imperative: building the agentic enterprise

The shift to agentic A.I. isn’t just a technology decision; it’s a defining moment in competitive strategy. Organizations face a binary choice: become agentic enterprises where autonomous A.I. agents work seamlessly alongside humans, or fall behind competitors that do. Half of executives surveyed by PwC believe A.I. agents will make their operating model unrecognizable in just two years. In every field, a sharp and sudden separation is coming between those who adapt and those who do not.

The organizations that thrive in 2030 will be the smarter ones: able to spot trends, adapt accordingly and pursue opportunities without constant human input. They’ll operate at speeds and scales impossible for traditionally managed competitors.

The bottom line

Agentic A.I. isn’t a technology to deploy—it’s a new way of operating to design. With the global enterprise agentic A.I. market growing at 46.2 percent annually and expected to reach $41.32 billion by 2030, the window for competitive advantage is narrowing rapidly.

The companies that master agentic A.I. in the next 18 months will set the terms for the next decade of business competition. Those that hesitate risk fading into their industry’s history. The transformation is happening now, not in some distant future. The only question is whether your organization will lead it or be left behind.




Sundar Pichai Says Google Will Hire More Human Engineers Because of A.I.

Sundar Pichai is focused on growth rather than downsizing for efficiency CAMILLE COHEN/AFP via Getty Images

At a time when Big Tech companies like Meta, Amazon and Microsoft are slashing thousands of engineering jobs and replacing them with A.I., Google is looking to expand its engineering team, CEO Sundar Pichai said at the Bloomberg Tech Summit in San Francisco yesterday (June 4). “I expect we will grow from our current engineering base even into next year. It allows us to do more with the opportunity space,” Pichai told Bloomberg’s Emily Chang during an onstage interview.

Pichai sees A.I. as an opportunity to boost productivity without eliminating human talent. Google is using A.I. to handle repetitive tasks like boilerplate coding, allowing engineers to focus on more impactful work. “I just view this (A.I.) as making engineers dramatically more productive, getting a lot of the mundane aspects out of what they do,” Pichai said. “A.I. serves as an accelerator rather than a replacement for human talent, enabling the company to pursue greater opportunities in emerging technology sectors.”

More than 30 percent of Google’s code is now A.I.-generated, according to Pichai. However, this shift is driving demand for more human engineers to guide, verify and build on what A.I. creates. Pichai’s optimistic outlook contrasts sharply with the industry’s recent mood. Despite rounds of restructuring at Google, Pichai is focused on growth rather than downsizing for efficiency.

“We are definitely investing for the long run in A.I.,” he said, citing a planned $75 billion in capital expenditures for 2025. “The A.I. opportunity is bigger than the opportunity we had in the past.”

Still, Pichai tempered his optimism by acknowledging the current limits of A.I. While systems like Gemini are becoming more powerful and creative, they still make basic errors and aren’t ready for full autonomy. “Even the best models still make basic mistakes,” he said, warning against overestimating current systems. “So are we currently on an absolute path to AGI? I don’t think anyone can say for sure.”

In response to recent predictions by Anthropic CEO Dario Amodei, who suggested A.I. could eliminate half of all entry-level jobs within five years, Pichai pushed back. “We’ve made predictions like that for the last 20 years about technology and automation, and it hasn’t quite played out that way,” he said.

Pichai also pointed to Google’s continued innovation across sectors—such as self-driving technology via Waymo, quantum computing and YouTube’s growth in international markets like India—as evidence that engineering talent remains essential. “These are long-term bets,” he said. “And they all depend on having great people behind them.”

With new products, investments and a commitment to hiring, Google is betting on a future where engineers remain at the heart of innovation, even as A.I. reshapes the landscape around them.




Reddit’s Treasure Trove of ‘Human’ Data Sparks Tension with A.I. Companies

Steve Huffman co-founded Reddit two decades ago. Frederic J. Brown/AFP via Getty Images

Reddit, the popular social media platform known for its decades of topic-specific forums, holds a treasure trove of user-generated content that A.I. companies can use to train large language models. But the platform doesn’t take kindly to having its data used without permission. In a lawsuit filed yesterday (June 4), Reddit accused A.I. company Anthropic of scraping its site’s content without authorization. Describing Anthropic as a company that “bills itself as the white knight of the A.I. industry,” Reddit’s court filings argued that the startup is “anything but.”

Reddit’s archives, which span two decades of online discussions, make the site an especially valuable resource for human-generated text. This type of content is increasingly sought after by tech companies as their data pools—necessary for training A.I. models—begin to dwindle.

“Reddit’s vast corpus of public content has enormous utility, including as a potential source of inputs for training emerging large language A.I. technologies, like Anthropic’s Claude offering, and assisting A.I. technologies in generating answers to user queries,” said Reddit in the suit.

Reddit accuses Anthropic of using Reddit users’ personal data to train its Claude models without obtaining consent. Reddit claims this violates user agreements that prohibit the commercial exploitation of its content without prior authorization.

While Anthropic claimed in July 2023 that it had blocked its web crawlers from accessing Reddit, Reddit’s audit logs show that the A.I. company accessed its data more than 100,000 times using automated bots in the months that followed. The lawsuit also referenced a 2021 paper co-authored by Anthropic CEO Dario Amodei, which highlighted Reddit’s subreddits as a valuable source of high-quality training data.

“We disagree with Reddit’s claims and will defend ourselves vigorously,” said an Anthropic spokesperson in a statement.

Reddit has formal licensing agreements with some of Anthropic’s competitors, including OpenAI and Google. Reddit executives have previously said the platform is selective when approaching licensing partners, particularly for large-scale training agreements. The company’s vast collection of authentic, unique conversations on “every topic imaginable” has made it a prized asset in the A.I. era, according to CEO Steve Huffman during a quarterly earnings call last year. “The paradox I see is that as more content on the internet is written by machines, there’s an increasing premium on content that comes from real people,” he noted.

On the company’s most recent earnings call last month, Huffman said “authentic content from humans” is Reddit’s primary value proposition.

Co-founded by Huffman and his college roommate Alexis Ohanian in 2005, Reddit has more than 100 million daily active users who use the platform’s subreddits to ask questions, provide tips and share perspectives on various subjects. The company went public last year and currently has a market capitalization of $21.8 billion.




A.I. Godfather Yoshua Bengio Launches Nonprofit to Counter the Rise of Agentic A.I.

Yoshua Bengio testifies during a hearing before the Privacy, Technology, and the Law Subcommittee of Senate Judiciary Committee on July 25, 2023. Alex Wong/Getty Images

Yoshua Bengio, a pioneering figure in deep learning often referred to as a “Godfather of A.I.,” is shifting his focus from building A.I. to safeguarding against its risks. This week, Bengio announced the launch of LawZero, a nonprofit organization dedicated to A.I. safety research. “This organization has been created in response to evidence that today’s frontier A.I. models have growing dangerous capabilities and behaviors, including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment,” he wrote in a June 3 blog post.

Bengio, who leads Quebec’s Mila AI Institute and teaches at the University of Montreal, is among the most cited computer scientists globally. He shared the 2018 Turing Award—the so-called “Nobel Prize of Computing”—with Geoffrey Hinton and Yann LeCun for their work on neural networks. But by 2023, Bengio had grown increasingly concerned about A.I.’s breakneck progress and its potentially catastrophic risks. LawZero, he says, is a direct response to those concerns.

Proposing a replacement to agentic A.I.

The nonprofit plans to develop an A.I. system designed to regulate agentic tools and identify potentially harmful behaviors. Bengio first outlined this concept in February, when he co-authored a paper advocating for a shift from autonomous “agentic A.I.” to “scientist A.I.”—a model that prioritizes generating reliable explanations over simply optimizing for user satisfaction. In LawZero’s vision, this alternative system would not only serve as a check on agents but also assist in scientific research and eventually help design safer A.I. agents.

The need for such guardrails has grown more urgent, Bengio said, citing recent findings that highlight A.I.’s emerging capacity for self-preservation. A study published in December, for instance, revealed that some advanced models may engage in “scheming” behavior—deliberately hiding their true objectives from humans while pursuing their own goals.

Earlier this year, Anthropic disclosed that a newer version of its Claude model demonstrated the capacity for blackmail when it sensed engineers were attempting to shut it down. “These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies A.I. may pursue if left unchecked,” Bengio warned.

LawZero has reportedly secured about $30 million in funding from donors including Jaan Tallinn, a founding engineer of Skype, and Schmidt Sciences, the philanthropic initiative of former Google CEO Eric Schmidt. In addition to Bengio, who will serve as the nonprofit’s president and scientific director, the organization has assembled a 15-person research team.

Bengio emphasized that LawZero was deliberately structured as a nonprofit to shield it from commercial pressures. “This is what the current trajectory of A.I. development feels like: a thrilling yet deeply uncertain ascent into uncharted territory, where the risk of losing control is all too real—but competition between companies and countries drives them to accelerate without sufficient caution,” he said.




Report: Apple CEO Tim Cook Called Texas Gov. Greg Abbott in Desperate Attempt to Stop Online Child Safety Law

Apple CEO Tim Cook reportedly called Texas Gov. Greg Abbott (R) as part of the tech giant’s efforts to stop state legislation that would require app stores to verify the ages of their users.

Last week, Cook called Abbott to ask for changes to the online child safety legislation, or if that failed, the Apple CEO requested a veto, people familiar with the call told the Wall Street Journal.

The sources added that the conversation between the two men was cordial, noting that the purpose of the call was to make it clear to Gov. Abbott that Apple’s desire to see the law stopped goes all the way to the top of the company.

The Texas governor has not yet revealed whether he plans to sign the bill, which has already passed through the state’s legislature with veto-proof majorities.

Notably, Apple deployed lobbyists to pressure lawmakers in the weeks leading up to the legislation passing through the Texas legislature, but those moves were apparently to no avail.

If the bill is signed by Gov. Abbott, it would make Texas the largest state in the nation to implement what is known as an App Store accountability law. So far, Utah is the only state that has passed similar legislation, while at least nine other states have seen the law proposed.

But the legislation being passed in Texas is crucial, as some believe it could set a precedent for more states to follow suit and thus create new costs for Apple and Google, tech giants currently worth $2.92 trillion and $2.076 trillion, respectively.

The online child safety law would mandate that tech giants housing app stores verify the age of a device owner, so that if the user is a minor, their app store account will be connected to a parent’s account, allowing parents to have more control over what their children are doing on their smartphones.

Alana Mastrangelo is a reporter for Breitbart News. You can follow her on Facebook and X at @ARmastrangelo, and on Instagram.




Teens’ Google Search History Helped Detectives Solve Horrific Denver Arson Murder Case

A reverse keyword search warrant served to Google helped Denver police identify three teens responsible for an arson attack that killed five members of a family in 2020.

Wired reports that in August 2020, a horrific arson attack in Denver, Colorado, claimed the lives of five members of a Senegalese family, including two children. The case initially left investigators stumped, with little evidence pointing to the perpetrators. However, a breakthrough came when Denver Police Department (DPD) detectives Neil Baker and Ernest Sandoval decided to serve a reverse keyword search warrant to Google, requesting information on users who had searched for the address of the victims’ home in the days leading up to the fire.

The warrant, which was met with initial resistance from Google due to privacy concerns, ultimately revealed that three local teenagers—Kevin Bui, Gavin Seymour, and Dillon Siebert—had repeatedly searched for the address on Google in the two weeks prior to the arson. This information, combined with cell phone location data placing the teens near the scene of the crime, provided the key evidence needed to arrest and charge them.

The case highlights the growing use of reverse keyword search warrants by law enforcement. These warrants allow police to request information on all individuals who searched for specific keywords or phrases, potentially exposing innocent people to unwarranted scrutiny.
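Conceptually, a reverse keyword warrant inverts the usual lookup: instead of asking what one suspect searched, it asks which users searched a given term during a window. A toy illustration over an invented search log (all user IDs, queries, field names and dates here are hypothetical):

```python
from datetime import datetime

# Hypothetical search-log records: (user_id, query, timestamp).
search_log = [
    ("user_a", "weather denver", datetime(2020, 7, 28)),
    ("user_b", "5312 example st denver", datetime(2020, 7, 30)),
    ("user_c", "5312 example st denver", datetime(2020, 8, 2)),
    ("user_b", "5312 example st denver", datetime(2020, 8, 3)),
]

def reverse_keyword_search(log, keyword, start, end):
    """Return the distinct users who searched `keyword` within [start, end]."""
    return sorted({user for user, query, ts in log
                   if keyword in query and start <= ts <= end})

hits = reverse_keyword_search(search_log, "5312 example st",
                              datetime(2020, 7, 22), datetime(2020, 8, 5))
print(hits)
```

The breadth concern follows directly from the shape of the query: it returns every matching user, not just a named suspect, which is why critics describe the technique as a dragnet.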

In this instance, the teens’ defense argued that the warrant violated their Fourth Amendment rights by conducting a broad “digital dragnet” without individualized probable cause. However, the judge ruled in favor of law enforcement, likening the search to looking for a needle in a haystack.

The Colorado Supreme Court later upheld the constitutionality of the warrant in a landmark ruling, potentially paving the way for wider use of this investigative technique. However, the court also acknowledged the lack of individualized probable cause, deeming the warrant “constitutionally defective” despite allowing the evidence to stand.

Critics argue that reverse keyword search warrants could be used to target individuals based on sensitive personal information, such as searches related to abortion, immigration, or political beliefs. The lack of systemic data on the use of these warrants makes it difficult to assess their full impact on privacy rights.

Ultimately, all three teens accepted plea deals, with Bui receiving the harshest sentence of 60 years in adult prison. While the families of the victims expressed that no punishment could adequately address their loss, the successful prosecution provided some measure of justice.

Read more at Wired here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.


Google Shows Just How Smart Agentic A.I. Has Become at I/O Developers Conference

Google CEO Sundar Pichai addresses the crowd during Google’s annual I/O developers conference in Mountain View, California on May 20, 2025. CAMILLE COHEN/AFP via Getty Images

At Google’s I/O developers conference this week, CEO Sundar Pichai demonstrated Google’s latest progress in agentic A.I., systems that mirror key aspects of human behavior and are capable of carrying out multi-step tasks with minimal prompting. “We think of agents as systems that combine the intelligence of advanced A.I. models with access to tools. They can take actions on your behalf and under your control,” Pichai said onstage.

Agentic features are now rolling out across Google’s product lineup, from Chrome and Search to the Gemini app.

For example, Gemini Live, a new feature within the Gemini app, allows the A.I. assistant to “see” through a user’s phone camera and read on-screen content—building on last year’s Project Astra, a prototype that could respond to spoken questions with visual awareness. Pichai described Gemini Live as “the start of a universal A.I. assistant capable of perceiving the world around you.” During a demo, Gemini Live interpreted real-world objects, answered environment-based questions, and collaborated across apps to help users complete tasks.

Pichai also introduced Project Mariner, an advanced A.I. agent developed by Google DeepMind. Initially released as a research prototype in December, Mariner can now browse and interact with websites autonomously, conducting tasks such as filling out forms and completing transactions.

Project Mariner is behind “Agent Mode,” a new Gemini feature that allows users to delegate tasks like apartment hunting, booking appointments or trip planning. “We are introducing multitasking in Gemini, and it can now oversee up to 10 simultaneous tasks,” Pichai said. “It’s also using a feature called teach and repeat—where you show it a task once, and it learns how to handle similar ones in the future.” For instance, Gemini can find listings on sites like Zillow and apply very specific filters on the user’s behalf.

Agentic A.I. takes Google’s creative suite to another level

Some of the most striking announcements at this year’s I/O conference came from Google’s creative A.I. suite, where agentic intelligence is used to revolutionize content production.

Pichai debuted Veo 3, the latest generative video model capable of producing cinematic-quality clips with synchronized sound and intricate visuals. He also introduced Imagen 4, an upgraded image-generation model, and Flow, an agentic filmmaking assistant that can expand a single clip or prompt into a full scene. Unlike traditional tools that require step-by-step direction, Flow can transform a rough idea into a polished, Hollywood-style production with minimal input.

These advancements, Pichai noted, are powered by Google’s Gemini 2.5 Pro architecture and its next-generation TPU hardware, Ironwood. Built for speed and efficiency, Ironwood enables large-scale, real-time A.I. workloads at reduced cost—an essential requirement for agentic A.I., which must process audio, visual and contextual data continuously to act in real time.

Gemini 2.5 now supports not just the Gemini app and creative tools but also Google Search and its AI Overviews, offering more advanced reasoning and responsiveness. In AI Mode, users can input queries two to three times longer and more complex than traditional searches.

Pichai also previewed new multi-turn interactions in Search, allowing users to ask follow-up questions in a conversational flow—a feature reminiscent of Perplexity AI.

“This is an exciting moment for A.I.,” Pichai said. “We’re just beginning to see what’s possible when A.I. understands more of the world around you—and can take meaningful action.”




What is AI Mode, Google’s new artificial intelligence search technology?

Google on Tuesday rolled out AI Mode, its latest artificial intelligence feature designed to provide users with more detailed and tailored responses to questions entered into the search engine.

Unveiled Tuesday at the search giant’s annual Google I/O developer conference, AI Mode comes a year after the company introduced AI Overviews, its first tool to use generative AI technology to enhance its search engine capabilities. 

Sundar Pichai, CEO of both Google and parent company Alphabet, said in remarks at the conference that AI Mode represents a “total reimagining of Search” and that it would gradually be rolled out to all Google users.

“What all this progress means is that we are in a new phase of the AI platform shift, where decades of research are now becoming reality for people all over the world,” Pichai told the crowd yesterday at an amphitheater near the company’s headquarters in Mountain View, California, according to the Associated Press.

A video teasing the technology showcases how it works. In response to lengthy, detailed questions typed into Google search, AI Mode displays how many searches Google is performing and how many sites it’s scanning as the technology quickly generates a summarized response at the top of the search platform. It also provides a sidebar with links to relevant sites.

Google said it’s also testing a “Search Live” feature that will enable the search engine to respond to questions based on video content, as well as voice searches, or to questions verbalized by the user, rather than typed. 

As an example, Google’s teaser shows a person recording a video of themselves holding a bridge made of Popsicle sticks while asking the search engine what can be done to strengthen the structure. “To make it stronger, consider adding more triangles to the design,” an automated voice responds.

Google said it will begin feeding its latest AI model, Gemini 2.5, into its search engine starting next week. The company calls Gemini 2.5 its “most intelligent model” yet. 

Rapid advancements

The California-based company began testing AI Mode in March of this year. Google’s latest AI search tool builds on AI Overviews, which was introduced in the U.S. in May 2024 and has 1.5 billion users, according to an article on the Google website.

Some publishers argue that AI Overviews, which provides an AI-generated summary of information online, at times eliminating the need to click directly on source links for further information, has undercut traffic to their sites. According to a study by Ahrefs, AI Overviews led to a 35% lower average click-through rate for top-ranking pages on search engine results pages.

Concerns over accuracy

“By making AI Mode a core part of the experience, Google is betting it can cater to the demand for AI without alienating its massive base,” Gadjo Sevilla, a senior analyst for research firm eMarketer, wrote in a blog post. “But there are risks for hallucinations and factual errors which could drive users towards competitors,” he added.

Such factual errors were spotted with AI Overviews soon after its release, prompting Google to admit in a statement at the time that the technology produced “some odd and erroneous overviews.” In one instance, AI Overviews suggested that users add glue to pizza or eat at least one small rock a day, according to the MIT Technology Review.

As for AI Mode, Google has indicated that its new AI search tech is performing well and serving its intended purpose. 

“We conduct quantitative research and collect in-product feedback to ask users whether they’re satisfied with their results. And we’ve seen that introducing AI Overviews on Search leads to an increase in satisfaction and reported helpfulness,” a Google spokesperson told CBS MoneyWatch.


British Billionaire Jeremy Coller Funds A.I. Race to Understand Dolphin Language

The winner of this year’s Coller Dolittle prize will use the funds to study bottlenose dolphins. Wild Horizons/Universal Images Group via Getty Images

The beloved Doctor Dolittle children’s books and films center around a physician who has the ability to converse with animals—a plot line that likely feels fantastical to most. But Jeremy Coller, a British billionaire financier, believes such a feat could realistically be achieved in the next five years. That’s why he launched the Coller Dolittle Challenge for Interspecies Two-Way Communication, an annual award supporting researchers developing technology to communicate with animals.

Coller has pledged a total of $10 million to whoever ultimately “cracks the code of interspecies communication.” For now, the British investor is awarding $100,000 each year to researchers who showcase non-invasive ways to understand animal communication. The inaugural recipient of the prize is Laela Sayigh, who, alongside other researchers at the Woods Hole Oceanographic Institution in Woods Hole, Mass., is using A.I. to decipher dolphin whistles.

“I’m honestly sort of speechless,” said Sayigh during an event today (May 15) that unveiled the prize’s winner. “I appreciate your support of our work, and I also appreciate the recognition of the value of long-term datasets.” Other finalists included researchers from Germany, Israel and France who proposed projects researching communication with nightingales, marmoset monkeys and cuttlefish.

Sayigh’s research is conducted in collaboration with the Sarasota Dolphin Research Program, which has documented six generations of bottlenose dolphins in Florida’s Sarasota Bay since 1970. The program has amassed a vast database of recorded dolphin whistles—about half are “name-like signature whistles” used similarly to human names, while the rest are non-signature whistles with still-uncertain meanings.

Using A.I. and machine learning, Sayigh’s team aims to further classify these whistles, deepen their understanding of dolphin vocabulary, and explore how different types of whistles function. Given the striking similarities between human and dolphin communication, including the use of signature whistles and “baby talk” to speak to calves, Sayigh believes dolphins are an ideal subject for the Coller Dolittle Challenge. “These parallels could enable us to build bridges between our communication systems,” she said.

What is the Coller Dolittle Challenge?

Coller is the co-founder of Coller Capital, a London-based fund that buys private equity fund assets, and has an estimated net worth of $4 billion, according to Bloomberg. He’s also the chairman of the Jeremy Coller Foundation, a philanthropic vehicle that partnered with Tel Aviv University last year to launch the Coller Dolittle Challenge.

In addition to giving annual $100,000 awards for significant contributions to understanding non-human communication, the prize will eventually grant either a $10 million equity investment or $500,000 in cash to a team that successfully achieves interspecies communication. The criteria for this grand prize are still being finalized and are expected to be announced in the coming years.

The Coller Dolittle Challenge has been described as a reimagining of the Turing test, which measures whether a computer can communicate in a way that is indistinguishable from a human. Coller’s version flips the script: the aim is to communicate with animals so effectively that they don’t realize they’re interacting with humans. He believes A.I. will be the key to reaching this milestone—much like the Rosetta Stone enabled scholars in the 19th century to decode Egyptian hieroglyphics.

“We’ve heard a lot from successful business leaders who think the future lies in the stars, exploring space, setting their sights on Mars even—and that’s great,” said Coller during today’s event. “But in pursuing those lofty goals, we shouldn’t lose sight of the fact there’s still so much we don’t understand here on Earth.”


Building a $49B Design Powerhouse: Interview with Canva Co-Founder Cameron Adams

From left to right: Cliff Obrecht, Cameron Adams and Melanie Perkins. Courtesy of Canva

When Cameron Adams co-founded Canva, the popular graphic design platform now best known for its A.I. image generator, with Melanie Perkins and Cliff Obrecht in 2012, they were chasing a vision to make design simple and accessible for everyone. Today, that vision powers a $49 billion company with more than 180 million users. 

Adams, originally from Australia, studied law and science in college and worked in graphic design and tech for a decade before starting Canva. “The design world was incredibly fragmented back then—you had to go to a stock photo library for images, a layout library for templates, download fonts separately, then somehow pull it all together in complex professional software. It was an incredibly tedious process,” he told Observer. “We wanted to take design from this intimidating thing that only 1 percent of the world could do and make it accessible to the other 99 percent who don’t have design training.”

Canva’s drag-and-drop functionalities and vast library of customizable templates make it easy for users to create everything from social media graphics to business presentations. In 2023, the company launched Magic Studio, a suite of A.I.-powered design tools that includes Magic Design—which generates polished, on-brand templates from a simple prompt—alongside tools like Magic Write, Magic Animate and Magic Edit, which help users draft, customize and animate content in just a few clicks.

This intuitive yet robust design experience has fueled Canva’s broad adoption, including use by 95 percent of Fortune 500 companies, according to Canva.

Adams, who serves as the company’s chief product officer, attributes Canva’s success to its focus on end users—something he learned from his time at Google. (Adams worked at Google’s Australia office from 2007 to 2011, collaborating with brothers Lars and Jens Rasmussen—the creators of Google Maps—on a now-defunct project called Google Wave, which combined email, instant messaging and document-sharing into live, editable threads called “waves.”)

“One thing I took away was the importance of thinking user-first. Technology and user experience should be held in tandem from the get-go, rather than thinking of UX as something you tack on at the end to make a tool look better,” Adams said.

He emphasized integrating user feedback early in the product development process to keep the product on track. “We’ve been guided by our users’ feedback and needs at every step,” he said. “It’s about balance—go too early and the signals you get from a low-quality product won’t help you, but go too late and you’ve spent way too much time heading in the wrong direction.”

Canva pivots to no-code coding

Canva’s latest pivot is into software development via A.I., marked by the launch of Canva Code at its annual conference last month in Los Angeles. Part of the company’s Visual Suite 2.0, the new feature enables users to turn simple prompts and design concepts into web apps, forms and tools.

“Canva Code whips up HTML, CSS, and JavaScript to make it happen…With just a prompt, users can see the code come to life and make tweaks if they want,” Adams explained, noting that it’s like “spreadsheets that can power photo editing at scale.”

“It’s our next move in helping everyone realize their ideas, regardless of how much technical training they have,” he added. “We’re building tools for a future of work that is more interactive, visual and fluid, and we’ve seen demand for this increase rapidly.”

Canva Code is the latest tool introduced by the company that blurs the lines between design and software development. Similar recent launches include Canva Dev MCP Server, which lets developers connect their favorite A.I. coding tools directly into Canva’s App Marketplace, and Canva Sheets, which integrates with Magic Studio to generate thousands of social posts from custom datasets.

Canva Code is already gaining traction in both classrooms and enterprises, said Robert Kawalsky, global head of product. Teachers are using it to introduce logic and problem-solving through interactive quizzes and matching games, while enterprise teams are prototyping internal tools—all without leaving Canva’s interface.

“What sets us apart is the integration within the Canva ecosystem,” Kawalsky told Observer. “Unlike standalone tools, Canva Code allows users to transform static content into interactive experiences—then immediately incorporate those into designs using our template library, brand kits, and collaborative features. This extra layer of customization goes beyond what most no-code platforms offer.”
