Max Lytvyn, co-founder of Grammarly, speaking at Web Summit Vancouver 2025 on May 29. Vaughn Ridley/Web Summit via Sportsfile via Getty Images
Grammarly launched 16 years ago as a grammar-checking tool, but its founders always had more ambitious plans. One of them, Max Lytvyn, previously ran a plagiarism detection startup where he saw how often people struggled not with honesty, but with the sheer difficulty of writing clearly. With Grammarly, he set out to make the process of translating thoughts into words less intimidating. “Technology just wasn’t there to make it possible,” Lytvyn told Observer at Web Summit Vancouver this week.
That’s no longer the case. What began as a tool for fixing grammatical errors has evolved into a sophisticated A.I. platform helping users communicate more effectively across emails, documents, and messaging apps. Now, with a new $1 billion funding round led by General Catalyst—announced yesterday (May 29)—the San Francisco-based company is preparing to scale its A.I. capabilities even further.
The new funding will support Grammarly’s expansion across sales, marketing and acquisitions. “Moving fast means building fast, expanding the market fast, and potentially acquiring other companies to accelerate our progress,” said Lytvyn, who serves as the company’s head of revenue. Last valued at $13 billion in 2021, the company now reports more than $700 million in annual revenue and serves 40 million daily users.
The financing follows Grammarly’s acquisition of productivity startup Coda six months ago, a move that brought Coda CEO Shishir Mehrotra on board as Grammarly’s new CEO. The leadership change is part of a broader push to grow Grammarly’s A.I. capabilities.
“The emerging technology has been an accelerant—we can do way more now,” said Lytvyn, noting that Grammarly’s suite of A.I. agents will expand beyond grammar, plagiarism and summarization to include workplace tools like fact-checkers and systems that retrieve data from customer relationship management platforms.
The rise of generative A.I. has also brought a roster of newfound rivals. “Some of the things that only we could do, now anybody could do. That’s fine, that’s the nature of technology,” said Lytvyn. He noted that Grammarly still retains the benefit of scale and ubiquitous integration across applications.
Will higher education embrace A.I.?
“The educational system has to teach [students] to use A.I. effectively, rather than ban it,” said Lytvyn, who noted that students will need A.I. skills after they graduate and enter the workforce.
To support academic institutions navigating the challenges of A.I., Grammarly has rolled out tools like Authorship—a feature that identifies which parts of a document are original, A.I.-generated or copied from other sources. The tool echoes Lytvyn’s early work in plagiarism detection. “It’s almost like a next iteration of plagiarism detection,” he said.
But Grammarly’s user base now extends far beyond the classroom. “It’s from 6th graders all the way to professionals in every field,” said Lytvyn, who noted the company plans to eventually launch hundreds of specialized A.I. agents to support various communication needs. With the technology finally in place to realize Grammarly’s original vision, speed is now the priority. “A.I. accelerated everything, and to stay on top of this, we need to move fast,” he said.
Gary Marcus speaking at Web Summit Vancouver 2025 on May 27. Don Mackinnon/AFP via Getty Images
For someone who’s spent nearly his entire life working in artificial intelligence, Gary Marcus—researcher, author and cognitive scientist—isn’t exactly a cheerleader for the technology. In fact, he’s one of generative A.I.’s most outspoken skeptics, frequently questioning both its current capabilities and its potential to cause harm. “After 40 years of doing this, I’ve never gotten over the sense that we’re not really doing it right,” Marcus told Observer.
That hasn’t stopped him from engaging deeply with the field. Marcus studied under psychologist Steven Pinker, founded two machine learning startups and briefly led A.I. initiatives at Uber. He is a professor emeritus at New York University and, as a teenager, graduated from high school early after building a Latin-to-English translator.
At the heart of his disillusionment is the field’s current obsession with large language models (LLMs) and the neural networks that power them. Marcus argues these systems are fundamentally limited and should be supplemented with symbolic A.I.—a more traditional, logic-based approach rooted in rules and reasoning. “I think we’re very early in the history of A.I.,” said Marcus, who has also called for regulation to address the technology’s emerging risks.
Observer caught up with the prominent skeptic at Web Summit Vancouver this week to discuss all things A.I. The following conversation has been edited for length and clarity.
Observer: How did you first become aware of and interested in A.I.?
Gary Marcus: I was ten. It’s a very specific moment. I learned how to program on basically a simulation of a computer that was on paper. It was part of my after-school program for gifted kids, and that night I explained to the media how it worked. So, my media career and my A.I. career began that day.
How did you tell the media?
It was a show called Ray’s Way, which I didn’t know too much about, but they thought it’d be fun to have a human interest story. They nominated me to do the explanation, you know, cute kid explaining what a computer is. And I saw it on TV that night. There’s no archival footage that we can find, but I did actually see it on my father’s little black and white TV later that night.
When did you feel the most optimistic about A.I.?
I’ve never been fully optimistic about it. I don’t think we’ve made as much progress as I’d hoped we would. I would have guessed that, 40 years after I started looking at this, a lot of A.I. problems would have been solved. We have a lot of tools now that are very useful. But the bigger questions, like how do you represent and acquire knowledge? We still don’t have really good answers to those. So I’ve never reached a point of being really satisfied. I always want it to go better.
After working in a field for so long that didn’t receive recognition for years and years, how did you perceive the ChatGPT moment with A.I. breaking into the mainstream?
I started writing about it right away in my Substack, and I said immediately that it’s going to excite people but it has limits. I wrote an essay in December of 2022, right after ChatGPT came out, on Christmas Day, called What to Expect When You’re Expecting… GPT-4, and I made a series of predictions based on my understanding of how these things work and all the work I’d done before. I said everybody’s going to be really excited at first, but they’re going to become disillusioned. It’s going to hallucinate, it’s going to make dumb errors.
You go back, and everything that I said was true—not only of GPT-4, but of every model that’s come since. They all still hallucinate. They all still make stupid errors. You still can’t trust them. They’re all kind of useful for misinformation, which is something I said.
You’ve suggested that large language models aren’t the right way to pursue A.I. What would be the ideal way to pursue it, and is anyone doing that?
I wrote a paper in 2020 called The Next Decade in A.I. that described what I thought we should do, and what I still think we should do. It starts with neuro-symbolic A.I., which combines neural networks with classical A.I. approaches. There’s been a kind of historical division within the field, and we need a reconciliation.
AlphaFold is actually an example of neuro-symbolic A.I. We still don’t have a systematic way to do it; that’s just a start, a stepping stone we need first. We also need ways for systems to build cognitive models of the world as they see things happen.
When do you think artificial general intelligence (A.G.I.) will be achieved?
I would say definitely not this decade. Maybe the next decade and maybe longer. Science is hard. We don’t know the answers that long before we get there. You can think of a historical analogy: in the early 20th century, everybody wanted to figure out the molecular basis of genes and they all pursued the wrong hypothesis. They all thought the genes were some kind of protein, and they looked at lots of proteins and they were all wrong. And then it was only in the late ’40s, when Oswald Avery did the right experiment to rule out proteins, that things moved.
It was only a few years later that [James] Watson and [Francis] Crick, with Rosalind Franklin’s data, figured out the structure of DNA. Then we really were off to the races. It was like 30 years wandering the desert. When people have a bad idea, it’s hard to predict when they will move past that bad idea.
Where do things stand with A.I. regulation in the U.S.?
It’s completely fallen apart. It’s totally depressing. I spoke to the Senate in May of 2023, and we had strong bipartisan support, and the support of Sam Altman, for serious A.I. regulation. And nothing happened. Altman has changed his position. The Senate has become much less friendly. It is a huge step back, and things don’t look good right now. I think bad things will happen as a function of A.I.—cybercrime, discrimination in employment, etc. There’s very little protection for U.S. citizens right now.
You’ve previously proposed a regulatory process for A.I. that would operate similarly to how the U.S. Food and Drug Administration (FDA) approves drugs. Is that still a solution you support?
I still absolutely think we should have it. And I think that even more so because we’ve seen that some of the most recent models may be able to help people make biological weapons. Is that okay? Is it worth the gain? Society should have a vote, the government should have a vote, on whether you release something that is going to materially increase the chance of some serious harm in society. It’s insane that the companies can do what they want without any checks and balances.
What do you make of the recent cuts to U.S. science funding?
Best thing that ever happened to China. I mean, it’s really bad for the United States. We have historically, for many years, been basically the leader in science, and we’re just destroying that for no good reason.
Peter Pernot-Day heads up strategic and corporate affairs at Shein. Photo by Piaras Ó Mídheach/Sportsfile for Collision via Getty Images
Ever wonder how Shein, the budget-friendly fast fashion giant, manages to keep pace with trends seemingly overnight? According to Peter Pernot-Day, the company’s head of strategic and corporate affairs, it all comes down to Shein’s “micro-production” model—a system that allows for lightning-fast turnaround based on real-time demand.
“We are precisely tailoring the supply of products to the actual demand in the marketplace,” said Pernot-Day while speaking at Web Summit Vancouver today (May 30). Unlike traditional retailers that typically manufacture between 50,000 and 100,000 units per item months in advance, Shein starts with small batches—just 100 to 200 garments—based on emerging trends.
Shein then uses data from its e-commerce platform to assess interest, tracking metrics like product hovers, cart additions and social media shares. This real-time feedback enables designers to experiment boldly and helps the company maintain a wide range of styles while minimizing overproduction, said Pernot-Day.
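To make the mechanics concrete, here is a minimal sketch of how such a demand-feedback rule could look in code. It is purely illustrative, not Shein’s actual system: the function name, metric weights, thresholds and cap below are all invented for this example.

```python
# Illustrative sketch of a "micro-production" feedback rule: produce a
# small test batch, measure engagement, then decide how much to reorder.
# NOT Shein's actual system; all weights and thresholds are hypothetical.

def next_batch_size(hovers: int, cart_adds: int, shares: int,
                    units_sold: int, test_batch: int = 200) -> int:
    """Decide how many units of a style to produce next, given
    engagement with a small 100-200 garment test batch."""
    # Weight purchase intent (cart additions) above passive interest
    # (hovers) and social signals (shares).
    interest = 0.1 * hovers + 1.0 * cart_adds + 0.5 * shares

    sell_through = units_sold / test_batch  # fraction of test batch sold
    if sell_through < 0.3:
        return 0  # weak demand: retire the style instead of overproducing

    # Scale the reorder with sell-through and engagement, capped far
    # below a traditional 50,000-100,000 unit production run.
    return min(int(test_batch * sell_through * 10 + interest), 20_000)

# Example: 150 of 200 test units sold, with strong on-site engagement.
print(next_batch_size(hovers=5_000, cart_adds=400, shares=120, units_sold=150))
# -> 2460
```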
Founded in China and now headquartered in Singapore, Shein is known for its low-cost, trend-driven clothing. The company was valued at as much as $100 billion in 2022. But with a potential IPO in the U.K. on the horizon and mounting economic challenges—including the Trump administration’s tariffs and the removal of the “de minimis” exemption that previously allowed goods under $800 to enter the U.S. duty-free—Shein is reportedly under pressure to slash its valuation to around $30 billion. The company also continues to face criticism over labor practices, intellectual property disputes and environmental impact.
Still, Pernot-Day argued that Shein’s on-demand production model actually reduces waste. Because the company only manufactures what consumers are likely to buy, he said, excess inventory remains in the “very low single digits.”
In response to claims that Shein fuels overconsumption, Pernot-Day also defended the durability of its products. “Around 68 percent of shoppers wear Shein products multiple, multiple times,” he said, pushing back on the idea that the retailer produces “disposable” fashion. “When you look at the data, when you talk to our customers, they’re keeping our clothes for longer, and the principal way in which they dispose [of] them is through gifting.”
Microsoft president Brad Smith speaks at Web Summit Vancouver 2025 on May 28. Photo by Vaughn Ridley/Web Summit via Sportsfile via Getty Images
The rise of A.I. is frequently described as a new industrial revolution, following the sweeping economic transformations brought by mechanized manufacturing, mass production and the digital age. Because of this, A.I. leaders should study history carefully to avoid repeating past mistakes, Microsoft president Brad Smith said during a talk at Web Summit Vancouver yesterday (May 28).
Smith emphasized the importance of equitable access to emerging technologies. He pointed to the uneven global diffusion of electricity following Thomas Edison’s commercialization of the light bulb—a foundational moment in the Second Industrial Revolution. “How can it be that we moved from electricity to computers and now A.I., and we haven’t yet even finished diffusing electricity itself?” he asked, noting that hundreds of millions of people still lack access to electricity nearly 150 years later.
Calling this disparity potentially the “greatest tragedy” in the history of technology, Smith urged the A.I. industry to ensure its benefits are distributed more broadly. “We can do better, not just to create better technology, but to bring the benefits of that technology to everyone around the world,” he said.
Widespread A.I. adoption, he argued, depends on significant investment in the infrastructure that underpins innovation—a domain where Microsoft is making a major play. The company plans to invest $80 billion this year in A.I. and data center infrastructure across 40 countries. This infrastructure, alongside platforms and applications, forms what Smith described as the “tech stack” of the A.I. economy.
Past industrial revolutions also built out such tech stacks. The spread of electricity, for example, spurred demand for fuels, turbines, electrical grids, transformers, wiring and appliances—industries that in turn created new categories of employment, said Smith.
He also stressed the critical role of education in unlocking job growth. During the First Industrial Revolution, the U.K. pulled ahead by training workers to use iron and new machinery. The U.S. later surged ahead by producing engineers skilled in electricity and machine tools, and reaped further gains by embracing computer science education during the Information Age. These examples, said Smith, underscore why teaching A.I. skills “will need to become one of the great causes of our industry.”
Ultimately, Smith argued, investment in education—not optimism alone—is what will determine whether A.I. augments human labor or replaces it. “Hope by itself is not a strategy,” he said. “I think history offers some important lessons.”
Jay Graber, head of Bluesky, pictured at Web Summit Vancouver 2025 on May 27. Sam Barnes/Web Summit via Sportsfile via Getty Images
Bluesky, the social networking platform gaining traction as a popular alternative to X, is positioning itself as a hub for personalized—and often niche—online communities. According to CEO Jay Graber, that same focus on customization is what allows Bluesky users to escape the echo chambers that dominate traditional social media.
“You can really just silo in the corner you want to be in,” said Graber during a talk at Web Summit Vancouver yesterday (May 27). Still, features like custom feeds and Bluesky starter packs—which curate lists of users centered on specific interests—can also encourage users to branch out. Graber noted she’s explored communities centered on fountain pens, medical studies and even commodities trading.
Bluesky users have the option to build timelines that are entirely randomized or intentionally filled with perspectives different from their own. In one instance, an early adopter experimented by randomly sorting users into teams and creating separate feeds to showcase posts from each group, Graber said.
Bluesky’s adaptable framework has contributed to a surge in growth, with its user base climbing to nearly 35 million in recent months, fueled in part by growing discontent with Elon Musk’s ownership of X. Initially incubated by Twitter in 2019, the Seattle-based platform became an independent company two years later, with Graber stepping in as CEO.
How a site with 35 million users is run by just 25 people
Despite its rapid expansion, Bluesky operates with a lean team of just 25 employees, who manage the demands of a fast-growing social media platform in part through strategic delegation. Earlier this year, the company launched a verification system for notable accounts—but it also introduced a “Trusted Verifier” program, enabling approved organizations such as The New York Times and Wired to verify users themselves. “You can have a system that builds more points of trust than just the one that we hold as a company,” said Graber.
Another key difference setting Bluesky apart from its competitors is its stance against traditional advertising. Instead, the company plans to monetize through a marketplace of related services and, eventually, user subscriptions. Its decentralized, open-source structure—which allows users to take their data with them or build parallel networks if dissatisfied—serves as a built-in safeguard against the kind of profit-first decisions, like ad-based models, that Graber said run counter to the company’s mission.
According to Graber, this approach mirrors the open structure of the web itself, where users can install ad blockers or switch search engines whenever they choose. “Creating these constraints for ourselves at the beginning means that we get more creative about how we’re approaching this problem,” she said.