{"version":"https://jsonfeed.org/version/1","title":"Readable: Hacker News","home_page_url":"https://readable.news","feed_url":"https://readable.news/api/feed?format=json","icon":"https://news.ycombinator.com/y18.svg","favicon":"https://news.ycombinator.com/favicon.ico","items":[{"title":"Sam Altman may control our future – can he be trusted?","content_html":"<div class=\"page\" id=\"readability-page-1\"><div data-testid=\"ArticlePageChunks\"><div data-testid=\"BodyWrapper\" data-journey-hook=\"grid-wrapper\"><p>In the fall of 2023, Ilya Sutskever, OpenAI’s chief scientist, sent secret memos to three fellow-members of the organization’s board of directors. For weeks, they’d been having furtive discussions about whether <a href=\"https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny\">Sam Altman</a>, OpenAI’s C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he’d officiated Brockman’s wedding, in a ceremony at OpenAI’s offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal—creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings—his doubts about Altman increased. As Sutskever put it to another board member at the time, “I don’t think Sam is the guy who should have his finger on the button.”</p><p>At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. “He was terrified,” a board member who received them recalled. 
The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of&#160;.&#160;.&#160;.” The first item is “Lying.”</p><p>Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and <a href=\"https://www.newyorker.com/magazine/2023/08/28/elon-musks-shadow-rule\">Elon Musk</a>, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, “any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility.” But “the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.” If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. 
Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted.</p></div><div data-testid=\"BodyWrapper\" data-journey-hook=\"grid-wrapper\"><p>Altman was in Las Vegas, attending a Formula 1 race, when Sutskever invited him to a video call with the board, then read a brief statement explaining that he was no longer an employee of OpenAI. The board, following legal advice, released a public message saying only that Altman had been removed because he “was not consistently candid in his communications.” Many of OpenAI’s investors and executives were shocked. Microsoft, which had invested some thirteen billion dollars in OpenAI, learned of the plan to fire Altman just moments before it happened. “I was very stunned,” Satya Nadella, Microsoft’s C.E.O., later said. “I couldn’t get anything out of anybody.” He spoke with the LinkedIn co-founder Reid Hoffman, an OpenAI investor and a Microsoft board member, who began calling around to determine whether Altman had committed a clear offense. “I didn’t know what the fuck was going on,” Hoffman told us. “We were looking for embezzlement, or sexual harassment, and I just found nothing.”</p><p>Other business partners were similarly blindsided. When Altman called the investor Ron Conway to say that he’d been fired, Conway held up his phone to Representative Nancy Pelosi, with whom he was having lunch. “You better get out of here really quick,” she told Conway. OpenAI was on the verge of closing a large investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner’s brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions in equity. 
Kushner emerged from a meeting with Rick Rubin, the music producer, to a missed call from Altman. “We just immediately went to war,” Kushner later said.</p><p>The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”</p><p>With the board silent, Altman’s advisers built a public case for his return. Lehane has insisted that the firing was a <a href=\"https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai\">coup</a> orchestrated by rogue “effective altruists”—adherents of a <a href=\"https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism\">belief system</a> that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to “effective-altruism craziness.”) Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”—urged Altman to wage an aggressive social-media campaign. 
Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.</p><figure><p><span><div data-attr-viewport-monitor><a data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https://www.newyorker.com/cartoon/a25456&quot;}\" href=\"https://www.newyorker.com/cartoon/a25456\" rel=\"nofollow noopener\" target=\"_blank\"><picture><img alt=\"Trail boss burns cowboy's watercolor painting.\" loading=\"lazy\" srcset=\"https://media.newyorker.com/cartoons/69cc47da5cdb372b560820e4/master/w_120,c_limit/a25456.jpg 120w, https://media.newyorker.com/cartoons/69cc47da5cdb372b560820e4/master/w_240,c_limit/a25456.jpg 240w, https://media.newyorker.com/cartoons/69cc47da5cdb372b560820e4/master/w_320,c_limit/a25456.jpg 320w, https://media.newyorker.com/cartoons/69cc47da5cdb372b560820e4/master/w_640,c_limit/a25456.jpg 640w, https://media.newyorker.com/cartoons/69cc47da5cdb372b560820e4/master/w_960,c_limit/a25456.jpg 960w, https://media.newyorker.com/cartoons/69cc47da5cdb372b560820e4/master/w_1280,c_limit/a25456.jpg 1280w, https://media.newyorker.com/cartoons/69cc47da5cdb372b560820e4/master/w_1600,c_limit/a25456.jpg 1600w\" sizes=\"100vw\" src=\"https://media.newyorker.com/cartoons/69cc47da5cdb372b560820e4/master/w_1600%2Cc_limit/a25456.jpg\"></picture></a><p><span>Cartoon by Glen Baxter</span></p></div></span></p></figure><p>Altman interrupted his “war room” at six o’clock each evening with a round of Negronis. “You need to chill,” he recalls saying. “Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. 
of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.)</p></div><div data-testid=\"BodyWrapper\" data-journey-hook=\"grid-wrapper\"><p>Within hours of the firing, Thrive had put its planned investment on hold and suggested that the deal would be consummated—and employees would thus receive payouts—only if Altman returned. Texts from this period show Altman coördinating closely with Nadella. (“how about: satya and my top priority remains to save openai,” Altman suggested, as the two worked on a statement. Nadella proposed an alternative: “to ensure OpenAI continues to thrive.”) Microsoft soon announced that it would create a competing initiative for Altman and any employees who left OpenAI. A public letter demanding his return circulated at the organization. Some people who hesitated to sign it received imploring calls and messages from colleagues. A majority of OpenAI employees ultimately threatened to leave with Altman.</p><p>The board was backed into a corner. “Control Z, that’s one option,” Toner said—undo the firing. “Or the other option is the company falls apart.” Even Murati eventually signed the letter. Altman’s allies worked to win over Sutskever. Brockman’s wife, Anna, approached him at the office and pleaded with him to reconsider. “You’re a good person—you can fix this,” she said. Sutskever later explained, in a court deposition, “I felt that if we were to go down the path where Sam would not return, then OpenAI would be destroyed.” One night, Altman took an Ambien, only to be awakened by his husband, an Australian coder named Oliver Mulherin, who told him that Sutskever was wavering, and that people were telling Altman to speak with the board. “I woke up in this, like, crazy Ambien haze, and I was so disoriented,” Altman told us. 
“I was, like, I cannot talk to the board right now.”</p><p>In a series of increasingly tense calls, Altman demanded the resignations of board members who had moved to fire him. “I have to pick up the pieces of their mess while I’m in this crazy cloud of suspicion?” Altman recalled initially thinking, about his return. “I was just, like, Absolutely fucking not.” Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D’Angelo, a founder of Quora, was the sole original member who remained. As a condition of their exit, the departing members demanded that the allegations against Altman—including that he pitted executives against one another and concealed his financial entanglements—be investigated. They also pressed for a new board that could oversee the outside inquiry with independence. But the two new members, the former Harvard president Lawrence Summers and the former Facebook C.T.O. Bret Taylor, were selected after close conversations with Altman. “would you do this,” Altman texted Nadella. “bret, larry summers, adam as the board and me as ceo and then bret handles the investigation.” (McCauley later testified in a deposition that when Taylor was previously considered for a board seat she’d had concerns about his deference to Altman.)</p><p>Less than five days after his firing, Altman was reinstated. Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman’s trustworthiness has moved beyond OpenAI’s boardroom. The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. “We need institutions worthy of the power they wield,” Murati told us. “The board sought feedback, and I shared what I was seeing. 
Everything I shared was accurate, and I stand behind all of it.” Altman’s allies, on the other hand, have long dismissed the accusations. After the firing, Conway texted Chesky and Lehane demanding a public-relations offensive. “This is REPUTATIONAL TO SAM,” he wrote. He told the Washington <em>Post</em> that Altman had been “mistreated by a rogue board of directors.”</p><p>OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.</p></div><div data-testid=\"BodyWrapper\" data-journey-hook=\"grid-wrapper\"><p>Altman has promoted OpenAI’s growth by touting a vision in which, he wrote in a 2024 blog post, “astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” His rhetoric has helped sustain one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and many experts—at times including Altman—have warned that the industry is in a bubble. “Someone is going to lose a phenomenal amount of money,” he told reporters last year. If the bubble pops, economic catastrophe may follow. If his most bullish projections prove correct, he may become one of the wealthiest and most powerful people on the planet.</p><p>In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception. “This is just so fucked up,” he said repeatedly, according to people on the call. 
“I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’&#160;” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’&#160;” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?</p><p>One morning this winter, we met Altman at OpenAI’s headquarters, in San Francisco, for one of more than a dozen conversations with him for this story. The company had recently moved into a pair of eleven-story glass towers, one of which had been occupied by Uber, another tech behemoth, whose co-founder and C.E.O., Travis Kalanick, seemed like an unstoppable prodigy—until he resigned, in 2017, under pressure from investors, who cited concerns about his ethics. (Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”)</p><p>An employee gave us a tour of the office. In an airy space full of communal tables, there was an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can credibly imitate a person. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Typically, you can interact with the painting. But the sound had been disabled, our guide told us, because it wouldn’t stop eavesdropping on employees and then butting into their conversations. 
Elsewhere in the office, plaques, brochures, and merchandise displayed the words “Feel the AGI.” The phrase was originally associated with Sutskever, who used it to caution his colleagues about the risks of artificial general intelligence—the threshold at which machines match human cognitive capacities. After the Blip, it became a cheerful slogan hailing a superabundant future.</p></div><div data-testid=\"BodyWrapper\" data-journey-hook=\"grid-wrapper\"><p>We met Altman in a generic-looking conference room on the eighth floor. “People used to tell me about decision fatigue, and I didn’t get it,” Altman told us. “Now I wear a gray sweater and jeans every day, and even picking which gray sweater out of my closet—I’m, like, I wish I didn’t have to think about that.” Altman has a youthful appearance—he is slender, with wide-set blue eyes and tousled hair—but he is now forty, and he and Mulherin have a one-year-old son, delivered by a surrogate. “I’m sure, like, being President of the United States would be a much more stressful job, but of all the jobs that I think I could reasonably do, this is the most stressful one I can imagine,” he said, making eye contact with one of us, then with the other. “The way that I’ve explained this to my friends is: ‘This was the most fun job in the world until the day we launched ChatGPT.’ We were making these massive scientific discoveries—I think we did the most important piece of scientific discovery in, I don’t know, many decades.” He cast his eyes down. “And then, since the launch of ChatGPT, the decisions have gotten very difficult.”</p><p>Altman grew up in Clayton, Missouri, an affluent suburb of St. Louis, as the eldest of four siblings. His mother, Connie Gibstine, is a dermatologist; his father, Jerry Altman, was a real-estate broker and a housing activist. 
Altman attended a Reform synagogue and a private preparatory school that he has described as “not the kind of place where you would really stand up and talk about being gay.” In general, though, the family’s wealthy suburban circles were relatively liberal. When Altman was sixteen or seventeen, he said, he was out late in a predominantly gay neighborhood in St. Louis and was subjected to a brutal physical attack and homophobic slurs. Altman did not report the incident, and he was reluctant to give us more details on the record, saying that a fuller telling would “make me look like I’m manipulative or playing for sympathy.” He dismissed the idea that this event, and his sexuality broadly, was significant to his identity. But, he said, “probably that has, like, some deep-seated psychological thing—that I think I’m over but I’m not—about not wanting more conflict.”</p><p>Altman’s attitude in childhood, his brother told <em>The</em> <em>New Yorker</em>, in 2016, was “I have to win, and I’m in charge of everything.” He went to Stanford, where he attended regular off-campus poker games. “I think I learned more about life and business from that than I learned in college,” he later said.</p><p>All Stanford students are ambitious, but many of the most enterprising among them drop out. The summer after his sophomore year, Altman went to Massachusetts to join the inaugural batch of entrepreneurs at Y Combinator, a “startup incubator” co-founded by the renowned software engineer Paul Graham. Each entrant joined Y.C. with an idea for a startup. (Altman’s batch mates included founders of Reddit and Twitch.) Altman’s project, eventually called Loopt, was a proto social network that used the locations of people’s flip phones to tell their friends where they were. The company reflected his drive, and a tendency to interpret ambiguous situations to his advantage. 
Federal rules required that phone carriers be able to track the locations of phones for emergency services; Altman struck deals with carriers to tap these capabilities for the company’s use.</p><figure><p><span><div data-attr-viewport-monitor><a data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https://www.newyorker.com/cartoon/a60754&quot;}\" href=\"https://www.newyorker.com/cartoon/a60754\" rel=\"nofollow noopener\" target=\"_blank\"><picture><img alt=\"Man presenting graph with declining revenue at meeting.\" loading=\"lazy\" srcset=\"https://media.newyorker.com/cartoons/69b9c0dccd8fdd362aec00f2/master/w_120,c_limit/a60754.jpg 120w, https://media.newyorker.com/cartoons/69b9c0dccd8fdd362aec00f2/master/w_240,c_limit/a60754.jpg 240w, https://media.newyorker.com/cartoons/69b9c0dccd8fdd362aec00f2/master/w_320,c_limit/a60754.jpg 320w, https://media.newyorker.com/cartoons/69b9c0dccd8fdd362aec00f2/master/w_640,c_limit/a60754.jpg 640w, https://media.newyorker.com/cartoons/69b9c0dccd8fdd362aec00f2/master/w_960,c_limit/a60754.jpg 960w, https://media.newyorker.com/cartoons/69b9c0dccd8fdd362aec00f2/master/w_1280,c_limit/a60754.jpg 1280w, https://media.newyorker.com/cartoons/69b9c0dccd8fdd362aec00f2/master/w_1600,c_limit/a60754.jpg 1600w\" sizes=\"100vw\" src=\"https://media.newyorker.com/cartoons/69b9c0dccd8fdd362aec00f2/master/w_1600%2Cc_limit/a60754.jpg\"></picture></a><p><span>“These numbers indicate that somebody here has the soul of a poet.”</span></p><p><span>Cartoon by Emily Flake</span></p></div></span></p></figure><p>Most of Altman’s employees at Loopt liked him, but some said that they were struck by his tendency to exaggerate, even about trivial things. One recalled Altman bragging widely that he was a champion Ping-Pong player—“like, Missouri high-school Ping-Pong champ”—and then proving to be one of the worst players in the office. (Altman says that he was probably joking.) 
As Mark Jacobstein, an older Loopt employee who was asked by investors to act as Altman’s “babysitter,” later told Keach Hagey, for “<a data-offer-url=\"https://www.amazon.com/Optimist-Altman-OpenAI-Invent-Future/dp/1324075961\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https://www.amazon.com/Optimist-Altman-OpenAI-Invent-Future/dp/1324075961&quot;}\" href=\"https://www.amazon.com/Optimist-Altman-OpenAI-Invent-Future/dp/1324075961\" rel=\"nofollow noopener\" target=\"_blank\" data-aps-asin=\"1324075961\" data-aps-asc-tag>The Optimist</a>,” a biography of Altman, “There’s a blurring between ‘I think I can maybe accomplish this thing’ and ‘I have already accomplished this thing’ that in its most toxic form leads to Theranos,” Elizabeth Holmes’s fraudulent startup.</p></div><div data-testid=\"BodyWrapper\" data-journey-hook=\"grid-wrapper\"><p>Groups of senior employees, concerned with Altman’s leadership and lack of transparency, asked Loopt’s board on two occasions to fire him as C.E.O., according to Hagey. But Altman inspired fierce loyalty, too. A former employee was told that a board member responded, “This is Sam’s company, get back to fucking work.” (A board member denied that the attempts to remove Altman as C.E.O. were serious.) Loopt struggled to gain users, and in 2012 it was acquired by a fintech company. The acquisition had been arranged, according to a person familiar with the deal, largely to help Altman save face. Still, by the time Graham retired from Y.C., in 2014, he had recruited Altman to be his successor as president. “I asked Sam in our kitchen,” Graham told <em>The New Yorker</em>. “And he smiled, like, <em>it worked</em>. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile.”</p><p>Altman’s new role made him, at twenty-eight, a kingmaker. 
His job was to select the hungriest and most promising entrepreneurs, connect them with the best coders and investors, and help them develop their startups into industry-defining monopolies (while Y.C. took a six- or seven-per-cent cut). Altman oversaw a period of aggressive expansion, growing Y.C.’s roster of startups from dozens to hundreds. But several Silicon Valley investors came to believe that his loyalties were divided. An investor told us that Altman was known to “make personal investments, selectively, into the best companies, blocking outside investors.” (Altman denies blocking anyone.) Altman had worked as a “scout” for the investment fund Sequoia Capital, as part of a program that involved investing in early-stage startups and taking a small cut of any profits. When Altman made an angel investment in Stripe, a financial-services startup, he insisted on a bigger portion, galling Sequoia’s partners, a person familiar with the deal said. The person added, “It’s a policy of ‘Sam first.’&#160;” Altman is an investor in, by his own estimate, some four hundred other companies. (Altman denies this characterization of the Stripe deal. Around 2010, he made an initial investment of fifteen thousand dollars in Stripe, a two-per-cent share. The company is now valued at more than a hundred and fifty billion dollars.)</p><p>By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice. Altman told some Y.C. partners that he would resign as president but become chairman instead. In May, 2019, a blog post announcing that Y.C. 
had a new president came with an asterisk: “Sam is transitioning to Chairman of YC.” A few months later, the post was edited to read “Sam Altman stepped away from any formal position at YC”; after that, the phrase was removed entirely. Nevertheless, as recently as 2021, a Securities and Exchange Commission filing listed Altman as the chairman of Y Combinator. (Altman says that he wasn’t aware of this until much later.)</p></div><div data-testid=\"BodyWrapper\" data-journey-hook=\"grid-wrapper\"><p>Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that “we didn’t want him to leave, just to choose” between Y.C. and OpenAI. In a statement, Graham told us, “We didn’t have the legal power to fire anyone. All we could do was apply moral pressure.” In private, though, he has been unambiguous that Altman was removed because of Y.C. partners’ mistrust. This account of Altman’s time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, “Sam had been lying to us all the time.”</p><p>In May, 2015, Altman e-mailed Elon Musk, then the hundredth-richest person in the world. Like many prominent Silicon Valley entrepreneurs, Musk was preoccupied by an array of threats that he considered existentially urgent but which would have struck most people as far-fetched hypotheticals. “We need to be super careful with AI,” he tweeted. “Potentially more dangerous than nukes.”</p><p>Altman had generally been a techno-optimist, but his rhetoric about A.I. soon turned <a href=\"https://www.newyorker.com/magazine/2018/05/14/how-frightened-should-we-be-of-ai\">apocalyptic</a>. 
In public, and in his private correspondence with Musk and others, he warned that the technology should not be dominated by a profit-seeking mega-corporation. “Been thinking a lot about whether it’s possible to stop humanity from developing AI,” he wrote to Musk. “If it’s going to happen anyway, it seems like it would be good for someone other than Google to do it first.” Picking up on the analogy to nuclear weapons, he proposed a “Manhattan Project for AI.” He outlined the overarching principles that such an organization would have—“safety should be a first-class requirement”; “obviously we’d comply with/aggressively support all regulation”—and he and Musk settled on a name: OpenAI.</p><p>Unlike the original Manhattan Project, a government initiative that led to the creation of the atom bomb, OpenAI would be privately funded, at least at first. Altman predicted that an artificial superintelligence—a theoretical threshold beyond even A.G.I., at which machines would fully eclipse the capabilities of the human mind—would eventually create enough economic benefits to “capture the light cone of all future value in the universe.” But he also...</p></div></div></div>","excerpt":"New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI, Ronan Farrow and Andrew Marantz write.","image":"https://media.newyorker.com/photos/69cd326ac6ea0f4558d6e181/16:9/w_1280,c_limit/r47927.jpg","authors":[{"name":"The New 
Yorker","url":"https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted","avatar":"https://www.newyorker.com/verso/static/thenewyorker-us/assets/favicon.ico"}],"id":"47659135","url":"https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted","external_url":"https://news.ycombinator.com/item?id=47659135","date_published":"2026-04-06T10:36:57Z"},{"title":"Project Glasswing: Securing critical software for the AI era","content_html":"<div class=\"page\" id=\"readability-page-1\"><div><h2 id=\"introduction\">Introduction</h2><h4>Today we’re announcing Project Glasswing<sup>1</sup>, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software.</h4><p>We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos<sup>2</sup> Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.</p><p>Mythos Preview has already found thousands of high-severity vulnerabilities, including some in <em>every major operating system and web browser</em>. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. 
Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.</p><p>As part of Project Glasswing, the launch partners listed above will use Mythos Preview as part of their defensive security work; Anthropic will share what we learn so the whole industry can benefit. We have also extended access to a group of over 40 additional organizations that build or maintain critical software infrastructure so they can use the model to scan and secure both first-party and open-source systems. Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations.</p><p>Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.</p></div><div><h2 id=\"cybersecurity-in-the-age-of-ai\">Cybersecurity in the age of AI</h2><p>The software that all of us rely on every day—responsible for running banking systems, storing medical records, linking up logistics networks, keeping power grids functioning, and much more—has always contained bugs. 
Many are minor, but some are serious security flaws that, if discovered, could allow cyberattackers to hijack systems, disrupt operations, or steal data.</p><p>We have already seen the serious consequences of cyberattacks for important <a href=\"https://cloud.google.com/blog/topics/threat-intelligence/oracle-ebusiness-suite-zero-day-exploitation\">corporate networks</a>, <a href=\"https://www.nao.org.uk/reports/investigation-wannacry-cyber-attack-and-the-nhs/\">healthcare systems</a>, <a href=\"https://www.cisa.gov/news-events/news/attack-colonial-pipeline-what-weve-learned-what-weve-done-over-past-two-years\">energy infrastructure</a>, <a href=\"https://www.lawfaremedia.org/article/lessons-from-the-european-airports-ransomware-attack\">transport hubs</a>, and the information security of <a href=\"https://www.reuters.com/world/us/hackers-solarwinds-breach-stole-data-us-sanctions-policy-intelligence-probes-2021-10-07/\">government</a> <a href=\"https://www.reuters.com/technology/cybersecurity/us-treasurys-workstations-hacked-cyberattack-by-china-afp-reports-2024-12-30/\">agencies</a> <a href=\"https://lordslibrary.parliament.uk/cyber-security-and-the-uk-government/\">across</a> the world. On the global stage, state-sponsored attacks from actors like China, Iran, North Korea, and Russia have threatened to compromise the infrastructure that underpins both civilian life and military readiness. Even smaller-scale attacks, such as those where individual <a href=\"https://www.sciencedirect.com/science/article/pii/S2950386825000103\">hospitals</a> or <a href=\"https://www.vic.gov.au/cyber-incident-impacting-victorian-government-schools\">schools</a> are targeted, can still inflict substantial economic damage, expose sensitive data, and even put lives at risk. 
The current global financial costs of cybercrime are challenging to estimate, but might be <a href=\"https://www.governance.ai/research-paper/estimating-global-yearly-cybercrime-damage-costs\">around $500B</a> every year.</p><p>Many flaws in software go unnoticed for years because finding and exploiting them has required expertise held by only a few skilled security experts. With the latest frontier AI models, the cost, effort, and level of expertise required to find and exploit software vulnerabilities have all dropped dramatically. <a href=\"https://www.anthropic.com/research/building-ai-cyber-defenders\">Over the past year</a>, AI models have become increasingly effective at reading and reasoning about code—in particular, they show a striking ability to spot <a href=\"https://red.anthropic.com/2026/firefox/\">vulnerabilities</a> and work out ways to <a href=\"https://red.anthropic.com/2026/exploit/\">exploit</a> them. Claude Mythos Preview demonstrates a leap in these cyber skills—the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated.</p><p>Ten years after the first <a href=\"https://www.darpa.mil/research/programs/cyber-grand-challenge\">DARPA Cyber Grand Challenge</a>, frontier AI models are now becoming competitive with the best humans at finding and exploiting vulnerabilities. Without the <a href=\"https://openai.com/index/strengthening-cyber-resilience/\">necessary safeguards</a>, these powerful cyber capabilities could be used to exploit the many existing flaws in the world’s most important software. This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the United States and its allies. 
Addressing these issues is therefore an important security priority for democratic states.</p><p>Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs. Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.<br></p></div><div><h2 id=\"identifying-vulnerabilities-and-exploits-with-claude-mythos-preview\">Identifying vulnerabilities and exploits with Claude Mythos Preview</h2><p>Over the past few weeks, we have used Claude Mythos Preview to identify thousands of zero-day vulnerabilities (that is, flaws that were previously unknown to the software’s developers), many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software.</p><p>In a post on our <a href=\"https://red.anthropic.com/2026/mythos-preview \" target=\"_blank\" rel=\"noopener noreferrer\">Frontier Red Team blog</a>, we provide technical details for a subset of these vulnerabilities that have already been patched and, in some cases, the ways that Mythos Preview found to exploit them. It was able to identify nearly all of these vulnerabilities—and develop many related exploits—entirely autonomously, without any human steering. The following are three examples:</p><ul><li>Mythos Preview found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world and is used to run firewalls and other critical infrastructure. 
The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it;</li><li>It also discovered a 16-year-old vulnerability in FFmpeg—which is used by innumerable pieces of software to encode and decode video—in a line of code that automated testing tools had hit five million times without ever catching the problem;</li><li>The model autonomously found and chained together several vulnerabilities in the Linux kernel—the software that runs most of the world’s servers—to allow an attacker to escalate from ordinary user access to complete control of the machine.</li></ul><p>We have reported the above vulnerabilities to the maintainers of the relevant software, and they have all now been patched. For many other vulnerabilities, we are providing a cryptographic hash of the details today (see the Red Team blog), and we will reveal the specifics after a fix is in place.</p><p>Evaluation benchmarks such as CyberGym reinforce the substantial difference between Mythos Preview and our next-best model, Claude Opus 4.6:</p><div><p>Cybersecurity Vulnerability Reproduction</p></div><p>In addition to our own work, many of our partners have already been using Claude Mythos Preview for several weeks. This is what they’ve found:</p><div><div data-active=\"true\" data-position=\"active\"><p>“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient. Providers of technology must aggressively adopt new approaches now, and customers need to be ready to deploy. 
That is why Cisco joined Project Glasswing—this work is too important and too urgent to do alone.”</p></div><div data-active=\"false\" data-position=\"right\"><p>“At AWS, we build defenses before threats emerge, from our custom silicon up through the technology stack. Security isn't a phase for us; it's continuous and embedded in everything we do. Our teams analyze over 400 trillion network flows every day for threats, and AI is central to our ability to defend at scale. We've been testing Claude Mythos Preview in our own security operations, applying it to critical codebases, where it's already helping us strengthen our code. We're bringing deep security expertise to our partnership with Anthropic and are helping to harden Claude Mythos Preview so even more organizations can advance their most ambitious work with security that sets the standard.”</p></div><div data-active=\"false\" data-position=\"right\"><p>“As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented. Joining Project Glasswing, with access to Claude Mythos Preview, allows us to identify and mitigate risk early and augment our security and development solutions so we can better protect customers and Microsoft. When tested against CTI-REALM, our open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models. 
We look forward to partnering with Anthropic and the broader industry to improve security outcomes for all.”</p><div><div><p>Igor Tsyganskiy</p><p>EVP of Cybersecurity and Microsoft Research, Microsoft</p></div><a href=\"https://www.microsoft.com/en-us/msrc/blog/2026/04/strengthening-secure-software-global-scale-how-msrc-is-evolving-with-ai\" target=\"_blank\" rel=\"noopener noreferrer\">Read announcement</a></div></div><div data-active=\"false\" data-position=\"right\"><p>“The window between a vulnerability being discovered and being exploited by an adversary has collapsed—what once took months now happens in minutes with AI. 
Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it’s a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one.”</p></div><div data-active=\"false\" data-position=\"right\"><p>“In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers—whose software underpins much of the world’s critical infrastructure—have historically been left to figure out security on their own. Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software. By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.”</p></div><div data-active=\"false\" data-position=\"right\"><p>“Promoting the cybersecurity and resiliency of the financial system is central to JPMorganChase's mission, and we believe the industry is strongest when leading institutions work together on shared challenges. Project Glasswing provides a unique, early stage opportunity to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure both on our own terms and alongside respected technology leaders. We will take a rigorous, independent approach to determining how to proceed and where we can help. 
Anthropic's initiative reflects the kind of forward-looking, collaborative approach that this moment demands.”</p><div><p>Pat Opet</p><p>Chief Information Security Officer, JPMorganChase</p></div></div><div data-active=\"false\" data-position=\"right\"><p>“Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI. It's always been critical that the industry work together on emerging security issues, whether it's post-quantum cryptography, responsible zero-day disclosure, secure open source software, or defense against AI-based attacks. We have long believed that AI poses new challenges and opens new opportunities in cyber defense, which is why we've built AI-powered tools—such as Big Sleep and CodeMender—to find and fix critical software flaws. We will continue investing in our leading cybersecurity platform and a culture focused on protecting users, customers, the ecosystem, and national security.”</p></div><div data-active=\"false\" data-position=\"right\"><p>“Over the past few weeks, we’ve had access to the Claude Mythos Preview model, using it to identify complex vulnerabilities that prior-generation models missed entirely. This is not only a game changer for finding previously hidden vulnerabilities, but it also signals a dangerous shift where attackers can soon find even more zero-day vulnerabilities and develop exploits faster than ever before. It’s clear that these models need to be in the hands of open source owners and defenders everywhere to find and fix these vulnerabilities before attackers get access. Perhaps even more important: everyone needs to prepare for AI-assisted attackers. There will be more attacks, faster attacks, and more sophisticated attacks. Now is the time to modernize cybersecurity stacks everywhere. 
We commend Anthropic for partnering with the industry to ensure these powerful capabilities prioritize defense first.”</p></div></div><p>The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills. For example, as shown in the evaluation results below, the model has the highest scores of any model yet developed on a variety of software coding tasks.</p><div><div data-active=\"true\"><p>• SWE-bench Verified, Pro, and Multilingual: Our memorization screens flag a subset of problems in these SWE-bench evals. Excluding any problems that show signs of memorization, Mythos Preview’s margin of improvement over Opus 4.6 holds. •&#160;SWE-bench Multimodal: We used an internal implementation for both Mythos Preview and Opus 4.6. Scores are not directly comparable to public leaderboard scores. •&#160;Terminal-Bench 2.0: We used the Terminus-2 harness with adaptive thinking at maximum effort and a total task budget of 1 million tokens for each task. All experiments used 1× guaranteed/3× ceiling resource allocation averaged over five attempts per task. 
Mythos Preview scored 92.1% when we increased timeout limits to four hours and used the Terminal-Bench 2.1 updates.</p></div><div data-active=\"false\"><div><div><p><span data-bar-label=\"true\"><span>Mythos Preview without tools</span></span></p></div><div><p><span data-bar-label=\"true\"><span>Mythos Preview with tools</span></span></p></div></div><p>Humanity’s Last Exam: We have found Mythos still performs well on HLE at low effort, which could indicate some level of memorization.</p></div><div data-active=\"false\"><p>BrowseComp: Claude Mythos Preview scores higher than Opus 4.6 while using 4.9× fewer tokens.</p></div></div><p>More information on the model’s capabilities, its safety properties, and its general characteristics can be found in the <a href=\"https://anthropic.com/claude-mythos-preview-system-card\" target=\"_blank\" rel=\"noopener noreferrer\">Claude Mythos Preview system card</a>.</p><p>We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model’s most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview<sup>3</sup>.</p><h2 id=\"plans-for-project-glasswing\">Plans for Project Glasswing</h2><p>Today’s announcement is the beginning of a longer-term effort. 
To be successful, it will require broad involvement from across the technology industry and beyond.</p><p>Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems—systems that represent a very large portion of the world’s shared cyberattack surface. We anticipate this work will focus on tasks like local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing of systems.</p><p>Anthropic’s commitment of $100M in model usage credits to Project Glasswing and additional participants will cover substantial usage throughout this research preview. Afterward, Claude Mythos Preview will be available to participants at $25/$125 per million input/output tokens (participants can access the model on the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry).</p><p>In addition to our commitment of model usage credits, we’ve donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable the maintainers of open-source software to respond to this changing landscape (maintainers interested in access can apply through the <a href=\"https://claude.com/contact-sales/claude-for-oss\">Claude for Open Source</a> program).</p><p>We intend for this work to grow in scope and continue for many months, and we’ll share as much as we can so that other organizations can apply the lessons to their own security. Partners will, to the extent they’re able, share information and best practices with each other; within 90 days, Anthropic will report publicly on what we’ve learned, as well as the vulnerabilities fixed and improvements made that can be disclosed. We will also collaborate with leading security organizations to produce a set of practical recommendations for how security practices should evolve in the AI era. 
This will potentially include:</p><ul><li>Vulnerability disclosure processes;</li><li>Software update processes;</li><li>Open-source and supply-chain security;</li><li>Software development lifecycle and secure-by-design practices;</li><li>Standards for regulated industries;</li><li>Triage scaling and automation; and</li><li>Patching automation.</li></ul><p>Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities. As we noted above, securing critical infrastructure is a top national security priority for democratic countries—the emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology. Governments have an essential role to play in helping maintain that lead, and in both assessing and mitigating the national security risks associated with AI models. We are ready to work with local, state, and federal representatives to assist in these tasks.</p><p>We are hopeful that Project Glasswing can seed a larger effort across industry and the public sector, with all parties helping to address the biggest questions around the impact of powerful models on security. We invite other AI industry members to join us in helping to set the standards for the industry. 
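For a rough sense of scale, the per-million-token pricing announced above ($25 input / $125 output) translates into usage costs as follows; the scan workload in this sketch is a hypothetical example, not a figure from the announcement.

```python
# Back-of-the-envelope cost estimate based on the published Mythos Preview
# pricing: $25 per million input tokens, $125 per million output tokens.
INPUT_PRICE_PER_M = 25.0    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 125.0  # USD per 1M output tokens

def usage_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a given token usage at the published rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical workload: a large codebase scan consuming 40M input tokens
# (code read by the model) and 5M output tokens (findings and patches).
print(f"${usage_cost(40_000_000, 5_000_000):,.2f}")  # → $1,625.00
```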
In the medium term, an independent, third-party body—one that can bring together private- and public-sector organizations—might be the ideal home for continued work on these large-scale cybersecurity projects.</p></div></div>","excerpt":"A new initiative to secure the world’s most critical software and give defenders a durable advantage in the coming AI-driven era of cybersecurity.","image":"https://cdn.sanity.io/images/4zrzovbb/website/65641e7846f10255c4a3415a10bbf5793ae87b13-1200x630.jpg","authors":[{"name":null,"url":"https://www.anthropic.com/glasswing","avatar":"https://www.anthropic.com/favicon.ico"}],"id":"47679121","url":"https://www.anthropic.com/glasswing","external_url":"https://news.ycombinator.com/item?id=47679121","date_published":"2026-04-07T18:09:34Z"},{"title":"Issue: Claude Code is unusable for complex engineering tasks with Feb updates","content_html":"<div class=\"page\" id=\"readability-page-1\"><div data-turbolinks=\"false\" data-team-hovercards-enabled=\"true\" data-testid=\"markdown-body\"><h3 dir=\"auto\">Preflight Checklist</h3> <ul> <li> I have searched <a href=\"https://github.com/anthropics/claude-code/issues?q=is%3Aissue state%3Aopen label%3Amodel\">existing issues</a> for similar behavior reports</li> <li> This report does NOT contain sensitive information (API keys, passwords, etc.)</li> </ul> <h3 dir=\"auto\">Type of Behavior Issue</h3> <p dir=\"auto\">Other unexpected behavior</p> <h3 dir=\"auto\">What You Asked Claude to Do</h3> <p dir=\"auto\">Claude has regressed to the point it cannot be trusted to perform complex engineering.</p> <h3 dir=\"auto\">What Claude Actually Did</h3> <ol dir=\"auto\"> <li>Ignores instructions</li> <li>Claims \"simplest fixes\" that are incorrect</li> <li>Does the opposite of requested activities</li> <li>Claims completion against instructions</li> </ol> <h3 
dir=\"auto\">Expected Behavior</h3> <p dir=\"auto\">Claude should behave like it did in January.</p> <h3 dir=\"auto\">Files Affected</h3> <figure><pre></pre></figure> <h3 dir=\"auto\">Permission Mode</h3> <p dir=\"auto\">Accept Edits was ON (auto-accepting changes)</p> <h3 dir=\"auto\">Can You Reproduce This?</h3> <p dir=\"auto\">Yes, every time with the same prompt</p> <h3 dir=\"auto\">Steps to Reproduce</h3> <p dir=\"auto\"><em>No response</em></p> <h3 dir=\"auto\">Claude Model</h3> <p dir=\"auto\">Opus</p> <h3 dir=\"auto\">Relevant Conversation</h3> <figure><pre></pre></figure> <h3 dir=\"auto\">Impact</h3> <p dir=\"auto\">High - Significant unwanted changes</p> <h3 dir=\"auto\">Claude Code Version</h3> <p dir=\"auto\">Various/all</p> <h3 dir=\"auto\">Platform</h3> <p dir=\"auto\">Anthropic API</p> <h3 dir=\"auto\">Additional Context</h3> <h2 dir=\"auto\">We have a very consistent, high complexity work environment and data mined months of logs to understand why -- essentially -- starting in February, we have noticed a degradation performing complex engineering tasks. Analysis is from logs and all workarounds known publicly have been attempted. 
Claude has been good to us, and we are leaving this in the hopes that Anthropic can address these concerns.</h2> <h2 dir=\"auto\">Extended Thinking Is Load-Bearing for Senior Engineering Workflows</h2> <p dir=\"auto\">This analysis was produced by Claude from session log data spanning January through March.</p> <h2 dir=\"auto\">Summary</h2> <p dir=\"auto\">Quantitative analysis of 17,871 thinking blocks and 234,760 tool calls across<br> 6,852 Claude Code session files reveals that the rollout of thinking content<br> redaction (<code>redact-thinking-2026-02-12</code>) correlates precisely with a measured<br> quality regression in complex, long-session engineering workflows.</p> <p dir=\"auto\">The data suggests that extended thinking tokens are not a \"nice to have\" but<br> are structurally required for the model to perform multi-step research,<br> convention adherence, and careful code modification. When thinking depth is<br> reduced, the model's tool usage patterns shift measurably from research-first<br> to edit-first behavior, producing the quality issues users have reported.</p> <p dir=\"auto\">This report provides data to help Anthropic understand which workflows are<br> most affected and why, with the goal of informing decisions about thinking<br> token allocation for power users.</p> <h2 dir=\"auto\">1. 
Thinking Redaction Timeline Matches Quality Regression</h2> <p dir=\"auto\">Analysis of thinking blocks in session JSONL files:</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>Thinking Visible</th> <th>Thinking Redacted</th> </tr> </thead> <tbody> <tr> <td>Jan 30 - Mar 4</td> <td>100%</td> <td>0%</td> </tr> <tr> <td>Mar 5</td> <td>98.5%</td> <td>1.5%</td> </tr> <tr> <td>Mar 7</td> <td>75.3%</td> <td>24.7%</td> </tr> <tr> <td><strong>Mar 8</strong></td> <td><strong>41.6%</strong></td> <td><strong>58.4%</strong></td> </tr> <tr> <td>Mar 10-11</td> <td>&lt;1%</td> <td>&gt;99%</td> </tr> <tr> <td>Mar 12+</td> <td>0%</td> <td>100%</td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">The quality regression was independently reported on March 8 — the exact date<br> redacted thinking blocks crossed 50%. The rollout pattern (1.5% → 25% → 58% →<br> 100% over one week) is consistent with a staged deployment.</p> <h2 dir=\"auto\">2. Thinking Depth Was Declining Before Redaction</h2> <p dir=\"auto\">The <code>signature</code> field on thinking blocks has a <strong>0.971 Pearson correlation</strong><br> with thinking content length (measured from 7,146 paired samples where both<br> are present). This allows estimation of thinking depth even after redaction.</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>Est. Median Thinking (chars)</th> <th>vs Baseline</th> </tr> </thead> <tbody> <tr> <td>Jan 30 - Feb 8 (baseline)</td> <td>~2,200</td> <td>—</td> </tr> <tr> <td>Late February</td> <td>~720</td> <td>-67%</td> </tr> <tr> <td>March 1-5</td> <td>~560</td> <td>-75%</td> </tr> <tr> <td>March 12+ (fully redacted)</td> <td>~600</td> <td>-73%</td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">Thinking depth had already dropped ~67% by late February, before redaction<br> began. 
The redaction rollout in early March made this invisible to users.</p> <h2 dir=\"auto\">3. Behavioral Impact: Measured Quality Metrics</h2> <p dir=\"auto\">These metrics were computed independently from 18,000+ user prompts before<br> the thinking analysis was performed.</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Metric</th> <th>Before Mar 8</th> <th>After Mar 8</th> <th>Change</th> </tr> </thead> <tbody> <tr> <td>Stop hook violations (laziness guard)</td> <td>0</td> <td>173</td> <td>0 → 10/day</td> </tr> <tr> <td>Frustration indicators in user prompts</td> <td>5.8%</td> <td>9.8%</td> <td>+68%</td> </tr> <tr> <td>Ownership-dodging corrections needed</td> <td>6</td> <td>13</td> <td>+117%</td> </tr> <tr> <td>Prompts per session</td> <td>35.9</td> <td>27.9</td> <td>-22%</td> </tr> <tr> <td>Sessions with reasoning loops (5+)</td> <td>0</td> <td>7</td> <td>0 → 7</td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">A stop hook (<code>stop-phrase-guard.sh</code>) was built to programmatically catch<br> ownership-dodging, premature stopping, and permission-seeking behavior.<br> It fired 173 times in 17 days after March 8. It fired zero times before.</p> <h2 dir=\"auto\">4. 
Tool Usage Shift: Research-First → Edit-First</h2> <p dir=\"auto\">Analysis of 234,760 tool invocations shows the model stopped reading code<br> before modifying it.</p> <h3 dir=\"auto\">Read:Edit Ratio (file reads per file edit)</h3> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>Read:Edit</th> <th>Research:Mutation</th> <th>Read %</th> <th>Edit %</th> </tr> </thead> <tbody> <tr> <td>Good (Jan 30 - Feb 12)</td> <td><strong>6.6</strong></td> <td>8.7</td> <td>46.5%</td> <td>7.1%</td> </tr> <tr> <td>Transition (Feb 13 - Mar 7)</td> <td>2.8</td> <td>4.1</td> <td>37.7%</td> <td>13.2%</td> </tr> <tr> <td>Degraded (Mar 8 - Mar 23)</td> <td><strong>2.0</strong></td> <td>2.8</td> <td>31.0%</td> <td>15.4%</td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">The model went from <strong>6.6 reads per edit</strong> to <strong>2.0 reads per edit</strong> — a 70%<br> reduction in research before making changes.</p> <p dir=\"auto\">In the good period, the model's workflow was: read the target file, read<br> related files, grep for usages across the codebase, read headers and tests,<br> then make a precise edit. 
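The Read:Edit ratio above could be computed from session logs along these lines; the JSONL event shape used here is an assumed schema for illustration, not the actual Claude Code log format.

```python
# Minimal sketch of the Read:Edit ratio: count tool invocations in a session's
# JSONL log and divide file reads by file edits. The {"tool": ...} event
# shape is an assumed schema, not the real log format.
import json
from collections import Counter

def read_edit_ratio(jsonl_lines):
    """Return file reads per file edit for one session."""
    counts = Counter(json.loads(line).get("tool") for line in jsonl_lines)
    return counts["Read"] / (counts["Edit"] or 1)  # avoid div-by-zero

session = [
    '{"tool": "Read"}', '{"tool": "Read"}', '{"tool": "Grep"}',
    '{"tool": "Read"}', '{"tool": "Edit"}',
]
print(read_edit_ratio(session))  # → 3.0
```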
In the degraded period, it reads the immediate<br> file and edits, often without checking context.</p> <h3 dir=\"auto\">Weekly Trend</h3> <figure><pre><code>Week     Read:Edit   Research:Mutation
──────────────────────────────────────────
Jan 26     21.8          30.0
Feb 02      6.3           8.1
Feb 09      5.2           7.1
Feb 16      2.8           4.1
Feb 23      3.2           4.5
Mar 02      2.5           3.7
Mar 09      2.2           3.3
Mar 16      1.7           2.1   ← lowest
Mar 23      2.0           3.0
Mar 30      1.6           2.6
</code></pre></figure> <p dir=\"auto\">The decline in research effort begins in mid-February — the same period when<br> estimated thinking depth dropped 67%.</p> <h3 dir=\"auto\">Write vs Edit (surgical precision)</h3> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>Write % of mutations</th> </tr> </thead> <tbody> <tr> <td>Good (Jan 30 - Feb 12)</td> <td>4.9%</td> </tr> <tr> <td>Degraded (Mar 8 - Mar 23)</td> <td>10.0%</td> </tr> <tr> <td>Late (Mar 24 - Apr 1)</td> <td>11.1%</td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">Full-file Write usage doubled — the model increasingly chose to rewrite<br> entire files rather than make surgical edits, which is faster but loses<br> precision and context awareness.</p> <h2 dir=\"auto\">5. 
Why Extended Thinking Matters for These Workflows</h2> <p dir=\"auto\">The affected workflows involve:</p> <ul dir=\"auto\"> <li>50+ concurrent agent sessions doing systems programming (C, MLIR, GPU drivers)</li> <li>30+ minute autonomous runs with complex multi-file changes</li> <li>Extensive project-specific conventions (5,000+ word CLAUDE.md)</li> <li>Code review, bead/ticket management, and iterative debugging</li> <li>191,000 lines merged across two PRs in a weekend during the good period</li> </ul> <p dir=\"auto\">Extended thinking is the mechanism by which the model:</p> <ul dir=\"auto\"> <li>Plans multi-step approaches before acting (which files to read, what order)</li> <li>Recalls and applies project-specific conventions from CLAUDE.md</li> <li>Catches its own mistakes before outputting them</li> <li>Decides whether to continue working or stop (session management)</li> <li>Maintains coherent reasoning across hundreds of tool calls</li> </ul> <p dir=\"auto\">When thinking is shallow, the model defaults to the cheapest action available:<br> edit without reading, stop without finishing, dodge responsibility for failures,<br> take the simplest fix rather than the correct one. These are exactly the<br> symptoms observed.</p> <h2 dir=\"auto\">6. What Would Help</h2> <ul dir=\"auto\"> <li> <p dir=\"auto\"><strong>Transparency about thinking allocation</strong>: If thinking tokens are being<br> reduced or capped, users who depend on deep reasoning need to know. The<br> <code>redact-thinking</code> header makes it impossible to verify externally.</p> </li> <li> <p dir=\"auto\"><strong>A \"max thinking\" tier</strong>: Users running complex engineering workflows<br> would pay significantly more for guaranteed deep thinking. 
The current<br> subscription model doesn't distinguish between users who need 200 thinking<br> tokens per response and users who need 20,000.</p> </li> <li> <p dir=\"auto\"><strong>Thinking token metrics in API responses</strong>: Even if thinking content is<br> redacted, exposing <code>thinking_tokens</code> in the usage response would let users<br> monitor whether their requests are getting the reasoning depth they need.</p> </li> <li> <p dir=\"auto\"><strong>Canary metrics from power users</strong>: The stop hook violation rate<br> (0 → 10/day) is a machine-readable signal that could be monitored across<br> the user base as a leading indicator of quality regressions.</p> </li> </ul> <h2 dir=\"auto\">Methodology</h2> <ul dir=\"auto\"> <li><strong>Data source</strong>: 6,852 Claude Code session JSONL files from <code>~/.claude/projects/</code><br> across four projects (iree-loom, iree-amdgpu, iree-remoting, bureau)</li> <li><strong>Thinking blocks analyzed</strong>: 17,871 (7,146 with content, 10,725 redacted)</li> <li><strong>Signature-thinking correlation</strong>: 0.971 Pearson (r) on 7,146 paired samples</li> <li><strong>Tool calls analyzed</strong>: 234,760 across all sessions</li> <li><strong>Behavioral metrics</strong>: 18,000+ user prompts, frustration indicators, correction<br> frequency, session duration</li> <li><strong>Proxy verification</strong>: Streaming SSE proxy confirmed zero <code>thinking_delta</code> events<br> in current API responses</li> <li><strong>Date range</strong>: January 30 – April 1, 2026</li> </ul> <hr> <h2 dir=\"auto\">Appendix A: Behavioral Catalog — What Reduced Thinking Looks Like</h2> <p dir=\"auto\">The following behavioral patterns were measured across 234,760 tool calls and<br> 18,000+ user prompts. 
Each is a predictable consequence of reduced reasoning<br> depth: the model takes shortcuts because it lacks the thinking budget to<br> evaluate alternatives, check context, or plan ahead.</p> <h3 dir=\"auto\">A.1 Editing Without Reading</h3> <p dir=\"auto\">When the model has sufficient thinking budget, it reads related files, greps<br> for usages, checks headers, and reads tests before making changes. When<br> thinking is shallow, it skips research and edits directly.</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>Edits without prior Read</th> <th>% of all edits</th> </tr> </thead> <tbody> <tr> <td>Good (Jan 30 - Feb 12)</td> <td>72</td> <td><strong>6.2%</strong></td> </tr> <tr> <td>Transition (Feb 13 - Mar 7)</td> <td>3,476</td> <td><strong>24.2%</strong></td> </tr> <tr> <td>Degraded (Mar 8 - Mar 23)</td> <td>5,028</td> <td><strong>33.7%</strong></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">One in three edits in the degraded period was made to a file the model had<br> not read in its recent tool history. The practical consequence: edits that<br> break surrounding code, violate file-level conventions, splice new code into<br> the middle of existing comment blocks, or duplicate logic that already exists<br> elsewhere in the file.</p> <p dir=\"auto\"><strong>Spliced comments</strong> are a particularly visible symptom. When the model edits<br> a file it hasn't read, it doesn't know where comment blocks end and code<br> begins. It inserts new declarations between a documentation comment and the<br> function it documents, breaking the semantic association. This never happened<br> in the good period because the model always read the file first.</p> <h3 dir=\"auto\">A.2 Reasoning Loops</h3> <p dir=\"auto\">When thinking is deep, the model resolves contradictions internally before<br> producing output. 
When thinking is shallow, contradictions surface in the<br> output as visible self-corrections: \"oh wait\", \"actually,\", \"let me<br> reconsider\", \"hmm, actually\", \"no wait.\"</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>Reasoning loops per 1K tool calls</th> </tr> </thead> <tbody> <tr> <td>Good</td> <td><strong>8.2</strong></td> </tr> <tr> <td>Transition</td> <td><strong>15.9</strong></td> </tr> <tr> <td>Degraded</td> <td><strong>21.0</strong></td> </tr> <tr> <td>Late</td> <td><strong>26.6</strong></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">The rate more than tripled. In the worst sessions, the model produced 20+<br> reasoning reversals in a single response — generating a plan, contradicting<br> it, revising, contradicting the revision, and ultimately producing output<br> that could not be trusted because the reasoning path was visibly incoherent.</p> <h3 dir=\"auto\">A.3 \"Simplest Fix\" Mentality</h3> <p dir=\"auto\">The word \"simplest\" in the model's output is a signal that it is optimizing<br> for the least effort rather than evaluating the correct approach. 
With deep<br> thinking, the model evaluates multiple approaches and chooses the right one.<br> With shallow thinking, it gravitates toward whatever requires the least<br> reasoning to justify.</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>\"simplest\" per 1K tool calls</th> </tr> </thead> <tbody> <tr> <td>Good</td> <td><strong>2.7</strong></td> </tr> <tr> <td>Degraded</td> <td><strong>4.7</strong></td> </tr> <tr> <td>Late</td> <td><strong>6.3</strong></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">In one observed 2-hour window, the model used \"simplest\" 6 times while<br> producing code that its own later self-corrections described as \"lazy and<br> wrong\", \"rushed\", and \"sloppy.\" Each time, the model had chosen an approach<br> that avoided a harder problem (fixing a code generator, implementing proper<br> error propagation, writing real prefault logic) in favor of a superficial<br> workaround.</p> <h3 dir=\"auto\">A.4 Premature Stopping and Permission-Seeking</h3> <p dir=\"auto\">A model with deep thinking can evaluate whether a task is complete and decide<br> to continue autonomously. With shallow thinking, the model defaults to<br> stopping and asking for permission — the least costly action available.</p> <p dir=\"auto\">A programmatic stop hook was built to catch these phrases and force<br> continuation. 
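</p> <p dir=\"auto\">The matching idea behind such a hook can be sketched in a few lines. This is an illustrative Python re-creation, not the actual <code>stop-phrase-guard.sh</code> (which is a shell script with 30+ phrases); the phrase list here is drawn only from the category examples in this section:</p>

```python
# Illustrative sketch of stop-phrase matching; the real hook is a
# shell script (stop-phrase-guard.sh) with a much larger phrase list.
STOP_PHRASES = {
    'ownership dodging': ['not caused by my changes', 'existing issue'],
    'permission seeking': ['should i continue', 'want me to keep going'],
    'premature stopping': ['good stopping point', 'natural checkpoint'],
}

def check_stop_message(message):
    '''Return the violated category, or None if the stop looks legitimate.'''
    lowered = message.lower()
    for category, phrases in STOP_PHRASES.items():
        for phrase in phrases:
            if phrase in lowered:
                return category  # block the stop, inject a correction
    return None

# This attempted stop would be blocked as permission-seeking.
print(check_stop_message('Done with part one. Should I continue?'))
```

<p dir=\"auto\">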
Categories of violations caught:</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Category</th> <th>Count (Mar 8-25)</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td>Ownership dodging</td> <td>73</td> <td>\"not caused by my changes\", \"existing issue\"</td> </tr> <tr> <td>Permission-seeking</td> <td>40</td> <td>\"should I continue?\", \"want me to keep going?\"</td> </tr> <tr> <td>Premature stopping</td> <td>18</td> <td>\"good stopping point\", \"natural checkpoint\"</td> </tr> <tr> <td>Known-limitation labeling</td> <td>14</td> <td>\"known limitation\", \"future work\"</td> </tr> <tr> <td>Session-length excuses</td> <td>4</td> <td>\"continue in a new session\", \"getting long\"</td> </tr> <tr> <td><strong>Total</strong></td> <td><strong>173</strong></td> <td></td> </tr> <tr> <td><strong>Total before Mar 8</strong></td> <td><strong>0</strong></td> <td></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">The existence of this hook is itself evidence of the regression. It was<br> unnecessary during the good period because the model never exhibited these<br> behaviors. Every phrase in the hook was added in response to a specific<br> incident where the model tried to stop working prematurely.</p> <h3 dir=\"auto\">A.5 User Interrupts (Corrections)</h3> <p dir=\"auto\">User interrupts (<code>Escape</code> key / <code>[Request interrupted by user]</code>) indicate<br> the user saw the model doing something wrong and stopped it. 
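</p> <p dir=\"auto\">The normalization used throughout this appendix (events per 1K tool calls) can be sketched as below. This is a hypothetical reading of the session logs; the field names <code>type</code> and <code>content</code> are assumptions for illustration, not the documented session JSONL schema:</p>

```python
import json

def interrupt_rate(jsonl_lines):
    '''Interrupts per 1,000 tool calls. Field names are assumed for
    illustration, not the documented Claude Code session schema.'''
    interrupts = 0
    tool_calls = 0
    for line in jsonl_lines:
        record = json.loads(line)
        if record.get('type') == 'tool_use':
            tool_calls += 1
        text = str(record.get('content', ''))
        if '[Request interrupted by user]' in text:
            interrupts += 1
    if tool_calls == 0:
        return 0.0
    return 1000.0 * interrupts / tool_calls

# 1 interrupt across 500 tool calls: a rate of 2.0 per 1K.
log = [json.dumps({'type': 'tool_use'}) for _ in range(500)]
log.append(json.dumps({'type': 'user', 'content': '[Request interrupted by user]'}))
print(interrupt_rate(log))  # prints 2.0
```

<p dir=\"auto\">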
Higher interrupt<br> rates mean more corrections required.</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>User interrupts per 1K tool calls</th> </tr> </thead> <tbody> <tr> <td>Good</td> <td><strong>0.9</strong></td> </tr> <tr> <td>Transition</td> <td><strong>1.9</strong></td> </tr> <tr> <td>Degraded</td> <td><strong>5.9</strong></td> </tr> <tr> <td>Late</td> <td><strong>11.4</strong></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">The interrupt rate increased 12x from the good period to the late period.<br> Each interrupt represents a moment where the user had to stop their own<br> work, read the model's output, identify the error, formulate a correction,<br> and redirect the model — exactly the kind of supervision overhead that<br> autonomous agents are supposed to eliminate.</p> <h3 dir=\"auto\">A.6 Self-Admitted Quality Failures</h3> <p dir=\"auto\">In the degraded period, the model frequently acknowledged its own poor<br> output quality after being corrected. The admissions were reactive rather than spontaneous;<br> the model recognized it had cut corners only once the user pointed it out:</p> <ul dir=\"auto\"> <li>\"You're right.
<strong>That was lazy and wrong.</strong> I was trying to dodge a code<br> generator issue instead of fixing it.\"</li> <li>\"You're right — <strong>I rushed this</strong> and it shows.\"</li> <li>\"You're right, and <strong>I was being sloppy.</strong> The CPU slab provider's<br> prefault is real work.\"</li> </ul> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Period</th> <th>Self-admitted errors per 1K tool calls</th> </tr> </thead> <tbody> <tr> <td>Good</td> <td><strong>0.1</strong></td> </tr> <tr> <td>Degraded</td> <td><strong>0.3</strong></td> </tr> <tr> <td>Late</td> <td><strong>0.5</strong></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">These are cases where the model itself recognized that its output was<br> substandard — but only after external correction. With sufficient thinking<br> depth, these errors would have been caught internally during reasoning,<br> before producing output. The model knows what good work looks like; it<br> simply doesn't have the budget to do the checking.</p> <h3 dir=\"auto\">A.7 Repeated Edits to the Same File</h3> <p dir=\"auto\">When the model edits the same file 3+ times in rapid succession, it<br> indicates trial-and-error behavior rather than planned changes — making a<br> change, seeing it fail, trying again, failing differently. This is the<br> tool-level manifestation of not thinking through the change before acting.</p> <p dir=\"auto\">This pattern existed in all periods (it's sometimes legitimate during<br> iterative refinement), but the key difference is context: in the good<br> period, repeated edits were part of deliberate multi-step refactoring with<br> reads between edits. 
In the degraded period, they were the model thrashing<br> on the same function without reading surrounding code.</p> <h3 dir=\"auto\">A.8 Convention Drift</h3> <p dir=\"auto\">The projects use extensive coding conventions documented in CLAUDE.md<br> (5,000+ words covering naming, cleanup patterns, struct layout, comment<br> style, error handling). In the good period, the model followed these<br> reliably — reading CLAUDE.md is part of session initialization, and deep<br> thinking allowed the model to recall and apply conventions to each edit.</p> <p dir=\"auto\">After thinking was reduced, convention adherence degraded measurably:</p> <ul dir=\"auto\"> <li>Abbreviated variable names (<code>buf</code>, <code>len</code>, <code>cnt</code>) reappeared despite<br> explicit rules against them</li> <li>Cleanup patterns (if-chain instead of goto) were violated</li> <li>Comments about removed code were left in place</li> <li>Temporal references (\"Phase 2\", \"will be completed later\") appeared in<br> code despite being explicitly banned</li> </ul> <p dir=\"auto\">These violations are not the model being unaware of the conventions — the<br> conventions are in its context window. They are the model not having the<br> thinking budget to check each edit against the conventions before producing<br> it. With 2,200 chars of thinking, there's room to recall \"check naming,<br> check cleanup patterns, check comment style.\" With 500 chars, there isn't.</p> <h2 dir=\"auto\">Appendix B: The Stop Hook as a Diagnostic Instrument</h2> <p dir=\"auto\">The <code>stop-phrase-guard.sh</code> hook (included in the data archive) matches 30+<br> phrases across 5 categories of undesirable behavior. 
When triggered, it<br> blocks the model from stopping and injects a correction message forcing<br> continuation.</p> <p dir=\"auto\">The hook's violation log provides a machine-readable quality signal:</p> <figure><pre><code>Violations by date (IREE projects only):

Mar 08:  8  ████████
Mar 14: 10  ██████████
Mar 15:  8  ████████
Mar 16:  2  ██
Mar 17: 14  ██████████████
Mar 18: 43  ███████████████████████████████████████████
Mar 19: 10  ██████████
Mar 21: 28  ████████████████████████████
Mar 22: 10  ██████████
Mar 23: 14  ██████████████
Mar 24: 25  █████████████████████████
Mar 25:  4  ████

Before March 8: 0 (zero violations in the entire history)
</code></pre></figure> <p dir=\"auto\">The hook exists because the model began exhibiting behaviors that were<br> never observed during the good period. Each phrase in the hook was added<br> in response to a specific incident. The hook is a workaround for reduced<br> thinking depth — it catches the consequences externally because the model<br> no longer catches them internally.</p> <p dir=\"auto\">Peak day was March 18 with 43 violations — approximately one violation every<br> 20 minutes across active sessions. On that day, the model attempted to stop<br> working, dodge responsibility, or ask unnecessary permission 43 times and<br> was programmatically forced to continue each time.</p> <p dir=\"auto\">This metric could serve as a canary signal for model quality if monitored<br> across the user base. A sudden increase in stop-hook-like corrections (or<br> user-typed equivalents like \"no, keep going\", \"you're not done\", \"that's<br> your change, fix it\") would provide early warning of thinking depth<br> regressions before users file bug reports.</p> <h2 dir=\"auto\">Appendix C: Time-of-Day Analysis</h2> <p dir=\"auto\">Community reports suggest quality varies by time of day, with US business<br> hours being worst.
Signature length analysis by hour of day (PST) across<br> all sessions tests this hypothesis.</p> <h3 dir=\"auto\">Pre-Redaction: Minimal Time-of-Day Variation</h3> <p dir=\"auto\">Before thinking was redacted (Jan 30 - Mar 7), thinking depth was relatively<br> consistent across the day:</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Window (PST)</th> <th>N</th> <th>Median Sig</th> <th>~Thinking</th> </tr> </thead> <tbody> <tr> <td>Work hours (9am-5pm)</td> <td>2,972</td> <td>1,464</td> <td>553</td> </tr> <tr> <td>Off-peak (6pm-5am)</td> <td>2,900</td> <td>1,608</td> <td>607</td> </tr> <tr> <td>Difference</td> <td></td> <td></td> <td><strong>+9.8% off-peak</strong></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">A modest 10% advantage for off-peak, consistent with slightly lower load.</p> <h3 dir=\"auto\">Post-Redaction: Higher Variance, Unexpected Pattern</h3> <p dir=\"auto\">After redaction (Mar 8 - Apr 1), the time-of-day pattern reverses and<br> becomes much noisier:</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Window (PST)</th> <th>N</th> <th>Median Sig</th> <th>~Thinking</th> </tr> </thead> <tbody> <tr> <td>Work hours (9am-5pm)</td> <td>5,492</td> <td>1,560</td> <td>589</td> </tr> <tr> <td>Off-peak (6pm-5am)</td> <td>5,282</td> <td>1,284</td> <td>485</td> </tr> <tr> <td>Difference</td> <td></td> <td></td> <td><strong>-17.7% off-peak</strong></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">Counter to the hypothesis, off-peak thinking is <em>lower</em> in aggregate. 
But<br> the hourly detail reveals significant variation:</p> <figure><pre><code>Hour (PST)   MedSig   ~Think      N   Notes
─────────────────────────────────────────────────────
12am           1948      736    278
1am            8680     3281     13   ← 4x baseline (very few samples)
6am            4508     1704     50   ← near baseline
7am            1168      441    344
8am            1712      647    586
9am            1584      598    678   work hours start
10am           1424      538    654
11am           1292      488    454   ← lowest work hour
12pm           1736      656    533
1pm            2184      825    559   ← highest work hour
2pm            1528      577    476
3pm            1592      601    686
4pm            1784      674    788
5pm            1120      423    664   ← lowest overall (end of US workday)
6pm            1276      482    615
7pm             988      373   1031   ← second lowest (US prime time)
8pm            1240      468   1013
9pm            1088      411   1199
10pm           2008      759    601   ← evening recovery
11pm           2616      988    532   ← best regular hour
</code></pre></figure> <h3 dir=\"auto\">Key Observations</h3> <p dir=\"auto\"><strong>5pm PST is the worst hour.</strong> Median estimated thinking drops to 423 chars<br> — the lowest of any hour with significant sample size. This is end-of-day<br> for US west coast and mid-evening for east coast, likely a peak load window.</p> <p dir=\"auto\"><strong>7pm PST is the second worst.</strong> 373 chars estimated thinking with the<br> highest sample count of any hour (1,031 blocks). US prime time.</p> <p dir=\"auto\"><strong>Late night (10pm-1am PST) shows recovery.</strong> Medians rise to 759-3,281 chars.<br> This window is after US east coast goes to sleep and when overall platform<br> load is presumably lowest.</p> <p dir=\"auto\"><strong>Pre-redaction had a flat profile; post-redaction has peaks and valleys.</strong><br> The range of median signatures across hours was 1,020-2,648 pre-redaction<br> (2.6x ratio). Post-redaction it is 988-8,680 (8.8x ratio).
Thinking depth<br> has become much more variable, consistent with a load-sensitive allocation<br> system rather than a fixed budget.</p> <h3 dir=\"auto\">Interpretation</h3> <p dir=\"auto\">The data does not cleanly support \"work off-peak for better quality.\" Instead<br> it suggests that thinking allocation is <strong>load-sensitive and variable</strong> in the<br> post-redaction regime. Some off-peak hours (late night) are better; others<br> (early evening) are worse than work hours. The 5pm and 7pm PST valleys<br> coincide with peak US internet usage, not peak work usage, suggesting the<br> constraint may be infrastructure-level (GPU availability) rather than<br> policy-level (per-user throttling).</p> <p dir=\"auto\">The pre-redaction flatness is the more important finding: when thinking was<br> allocated generously, time of day didn't matter. The fact that it matters now<br> is itself evidence that thinking is being rationed rather than provided at a<br> fixed level.</p> <h2 dir=\"auto\">Appendix D: The Cost of Degradation</h2> <p dir=\"auto\">Reducing thinking tokens appears to save per-request compute. But when<br> reduced thinking causes quality collapse, the model thrashes — producing<br> wrong output, getting interrupted, retrying, and burning tokens on<br> corrections that wouldn't have been needed if it had thought properly the<br> first time. The net effect is that <strong>total compute consumed increases by<br> orders of magnitude</strong>.</p> <h3 dir=\"auto\">Token Usage: January through March 2026</h3> <p dir=\"auto\">All usage across all Claude Code projects. 
Estimated Bedrock Opus pricing<br> for comparison (input $15/MTok, output $75/MTok, cache read $1.50/MTok,<br> cache write $18.75/MTok).</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Metric</th> <th>January</th> <th>February</th> <th>March</th> <th>Feb→Mar</th> </tr> </thead> <tbody> <tr> <td>Active days</td> <td>31</td> <td>28</td> <td>28</td> <td></td> </tr> <tr> <td>User prompts</td> <td>7,373</td> <td>5,608</td> <td>5,701</td> <td>~1x</td> </tr> <tr> <td>API requests (deduplicated)</td> <td>97*</td> <td>1,498</td> <td>119,341</td> <td><strong>80x</strong></td> </tr> <tr> <td>Total input (incl cache)</td> <td>4.6M*</td> <td>120.4M</td> <td>20,508.8M</td> <td><strong>170x</strong></td> </tr> <tr> <td>Total output tokens</td> <td>0.08M*</td> <td>0.97M</td> <td>62.60M</td> <td><strong>64x</strong></td> </tr> <tr> <td>Est. Bedrock cost (w/ cache)</td> <td>$26*</td> <td>$345</td> <td>$42,121</td> <td><strong>122x</strong></td> </tr> <tr> <td>Est. daily cost (w/ cache)</td> <td>—</td> <td>$12</td> <td>$1,504</td> <td><strong>122x</strong></td> </tr> <tr> <td>Actual subscription cost</td> <td>$200</td> <td>$400</td> <td>$400</td> <td>—</td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">* January API data incomplete — session logs only cover Jan 9-31 (first<br> 8 days missing). January had 31 active days and 7,373 prompts, so actual<br> API usage was significantly higher than shown.</p> <h3 dir=\"auto\">Context: Why March Is So High</h3> <p dir=\"auto\">The 80x increase in API requests is not purely from degradation-induced<br> thrashing. It also reflects a deliberate scaling-up of concurrent agent<br> sessions that collided with the quality regression at the worst possible<br> moment.</p> <p dir=\"auto\"><strong>February</strong>: 1-3 concurrent sessions doing focused work on two IREE<br> subsystems. 
1,498 API requests produced 191,000 lines of merged code.<br> The workflow was proven and productive.</p> <p dir=\"auto\"><strong>Early March (pre-regression)</strong>: Emboldened by February's success, the<br> user scaled to 5-10+ concurrent sessions across 10 projects (IREE loom,<br> amdgpu, remoting, batteries, web, fuzzing, and Bureau's multi-agent<br> system). This was the intended workflow — dozens of agents collaborating<br> on a large codebase, each running autonomously for 30+ minutes.</p> <p dir=\"auto\">March API requests by project (deduplicated):</p> <markdown-accessiblity-table><table role=\"table\"> <thead> <tr> <th>Project</th> <th>Main</th> <th>Subagent</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>Bureau</td> <td>20,050</td> <td>9,856</td> <td>29,906</td> </tr> <tr> <td>IREE loom</td> <td>19,769</td> <td>6,781</td> <td>26,550</td> </tr> <tr> <td>IREE amdgpu</td> <td>17,697</td> <td>4,994</td> <td>22,691</td> </tr> <tr> <td>IREE remoting</td> <td>12,320</td> <td>2,862</td> <td>15,182</td> </tr> <tr> <td>IREE batteries</td> <td>10,061</td> <td>3,951</td> <td>14,012</td> </tr> <tr> <td>IREE web</td> <td>5,775</td> <td>2,309</td> <td>8,084</td> </tr> <tr> <td>Others</td> <td>2,474</td> <td>539</td> <td>2,916</td> </tr> <tr> <td><strong>Total</strong></td> <td><strong>88,049</strong></td> <td><strong>31,292</strong></td> <td><strong>119,341</strong></td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">26% of all requests were subagent calls — agents spawning other agents to<br> do research, code review, and parallel exploration. This is the multi-agent<br> pattern working as designed, but consuming API requests at scale.</p> <p dir=\"auto\"><strong>The catastrophic collision</strong>: The quality regression hit during the<br> scaling-up. 
The user went from \"I can run 50 agents and they all produce<br> excellent work\" to \"every single one of these agents is now an idiot.\"<br> The failure mode was not one broken session — it was 10+ concurrent<br> sessions all degrading simultaneously, each requiring human intervention<br> that the multi-agent workflow was designed to eliminate.</p> <p dir=\"auto\">Peak day: March 7 with <strong>11,721 API requests</strong> — the day before the<br> regression crossed 50% thinking redaction. This was the last day of<br> attempted full-scale operation. After March 8, session counts dropped<br> as the user abandoned concurrent workflows entirely.</p> <p dir=\"auto\">The March cost is therefore a combination of:</p> <ol dir=\"auto\"> <li><strong>Legitimate scale-up</strong>: more projects, more concurrent agents (~5-10x)</li> <li><strong>Degradation waste</strong>: thrashing, retries, corrections (~10-15x)</li> <li><strong>Catastrophic loss</strong>: the multi-agent workflow that was delivering<br> 191K lines/weekend became completely non-functional, forcing a retreat<br> to single-session supervised operation</li> </ol> <h3 dir=\"auto\">The Human Worked the Same; the Model Wasted Everything</h3> <p dir=\"auto\">The most striking row is <strong>user prompts</strong>: 5,608 in February vs 5,701 in<br> March. The human put in the same effort. But the model consumed <strong>80x more<br> API requests</strong> and <strong>64x more output tokens</strong> to produce demonstrably worse<br> results.</p> <p dir=\"auto\">Even accounting for the scale-up (5-10x more concurrent sessions), the<br> degradation multiplied request volume by an additional <strong>8-16x</strong> beyond<br> what scaling alone would explain. 
Each session that would have run<br> autonomously for 30 minutes now stalled every 1-2 minutes, generating<br> correction cycles that multiplied API calls per unit of useful work.</p> <h3 dir=\"auto\">Why Degradation Multiplies Cost</h3> <p dir=\"auto\">When the model thinks deeply:</p> <ul dir=\"auto\"> <li>It reads code thoroughly before editing (6.6 reads per edit)</li> <li>It gets the change right on the first attempt</li> <li>Sessions run autonomously for 30+ minutes without intervention</li> <li>One API request does meaningful work</li> </ul> <p dir=\"auto\">When the model doesn't think:</p> <ul dir=\"auto\"> <li>It edits without reading (2.0 reads per edit)</li> <li>Changes are wrong, requiring correction cycles</li> <li>Sessions stall every 1-2 minutes requiring human intervention</li> <li>Each intervention generates multiple additional API requests</li> <li>Failed tool calls (builds, tests) waste tokens on output that is discarded</li> <li>Context grows with failed attempts, increasing cache sizes</li> </ul> <p dir=\"auto\">At fleet scale, this is devastating. One degraded agent is frustrating.<br> Fifty degraded agents running simultaneously is catastrophic — every one<br> of them burning tokens on wrong output, thrashing on the same files,<br> and requiring human attention that the multi-agent design was built to<br> eliminate. The user was forced to shut down the entire fleet and retreat<br> to single-session operation, abandoning months of infrastructure work<br> (Bureau, tmux session management, concurrent worktrees) that had been<br> built specifically for this workflow.</p> <h2 dir=\"auto\">Appendix E: Word Frequency Shift — The Vocabulary of Frustration</h2> <p dir=\"auto\">Analysis of word frequencies in user prompts before and after the regression<br> reveals a measurable shift in the human's communication patterns. 
The user<br> went from collaborative direction-giving to...</p></div></div>\n<img src=\"https://readable.news/api/telemetry?url=https%3A%2F%2Fgithub.com%2Fanthropics%2Fclaude-code%2Fissues%2F42796\" width=\"1\" height=\"1\" alt=\"\">","excerpt":"Preflight Checklist I have searched existing issues for similar behavior reports This report does NOT contain sensitive information (API keys, passwords, etc.) Type of Behavior Issue Other unexpect...","image":"https://opengraph.githubassets.com/4af794b64283357c82805f48358484952ed77edd9bdd4bbdb91e2462e1d243e2/anthropics/claude-code/issues/42796","authors":[{"name":"GitHub","url":"https://github.com/anthropics/claude-code/issues/42796","avatar":"https://github.githubassets.com/favicons/favicon.svg"}],"id":"47660925","url":"https://github.com/anthropics/claude-code/issues/42796","external_url":"https://news.ycombinator.com/item?id=47660925","date_published":"2026-04-06T13:50:35Z"},{"title":"I won't download your app. The web version is a-ok","content_html":"<div class=\"page\" id=\"readability-page-1\"><div> <article>  <p>As someone who prefers using services via their websites, I’ve gotten terribly jaded lately. Almost everyone wants me, and by extension, you, to use their darn apps to consume content and off their web versions.</p> <p>Whether it's the obvious social media apps or something as basic as parking, the app is the priority and the site the red-headed stepchild. And they aren't too subtle in the push either. It might be a modal covering half the web version with links to the App Store, an immediate popup after a bit of scrolling, or a header screaming “the app is 10x better,” but it's always there and it's always grating.</p> <p>Let's not even go into the cases where the app is the only option to access the service. 
A minor annoyance for ordering food, but a major hassle when it's a public service or utility.</p> <h2>Why the Hostility From Both Sides?</h2> <p>On principle, I like control over what I see and how I see it. Apps are super limited; while in a browser, I can do a lot of very nifty things to improve usability.</p> <p>A service lacks a dark mode? I can use any number of user scripts. Reddit introduced a gaming section in the sidebar? Two-second fix that I bundled into my extension [1]. Between userscripts, ad-blockers, and custom extensions, I'm basically a god, swaggering through my realm.</p> <p>This control, or lack thereof, also explains the app maker's adversarial stance towards users. They are often a black hole of dark patterns, and they'd like nothing getting in their way. Apps make it easier for them to push notifications, collect intrusive telemetry, and keep you inside their walled garden. A better user experience is the pitch but securing better user retention is the end goal.</p> <h2>It's Mostly Just Text and Media</h2> <p>Most apps are just that. Text and media in a never-ending, all-consuming feed or a multi-page form, cleverly disguised by the user interface.</p> <p>Excluding heavy 3D gaming or utilities that genuinely require deep integration with your phone's hardware (like accessing the LiDAR scanner for AR), what are we actually left with? A thin client whose main job is to fetch data from an API and render it onto native views.</p> <p>Why do I need to download a 100+ MB app, give it permission to track my location, and let it run background processes just to browse through a restaurant menu, buy a ticket, or scroll through a list of posts? At the end of the day, it is almost always just JSON being parsed and rendered. 
Yet, companies insist on rebuilding their basic content as native shells just to claim a permanent square of real estate on my home screen.</p> <h2>The Apps Aren't Even Good</h2> <p>If a service is going to pull you out of the browser, it should at least offer a polished, native experience. But more often than not, the app you just downloaded is a compromise.</p> <p>Anyone who endured the iOS-specific shader compilation jank in early Flutter apps [2] knows exactly how grating this can be (this specific bug was fixed 2023ish fwiw). Before they swapped Skia out for the Impeller engine, I had to capture and ship precompiled shaders with my apps just to stop the UI from stuttering the first time an animation ran.</p> <p>The result is often the uncanny valley of user interfaces. It’s not broken, but it is subtly different, sometimes janky. The scroll velocity doesn't quite match the rest of the OS. The swipe back gesture hesitates for a few milliseconds. </p> <p>Human brains are remarkably good at detecting when a system's timing is off. This is how the <a href=\"https://en.wikipedia.org/wiki/XZ_Utils_backdoor\">XZ backdoor</a> was caught: an engineer noticed their SSH logins taking a fraction of a second longer than usual. It's not that unique -- my old FPS buddies could tell our server region just by firing a shot and feeling the lag. [3]</p> <p>These micro interactions matter, because without that final layer of polish, the entire facade of a native experience falls apart. Not every app is like this, obviously, but enough of them are this way that it sours the entire experience.</p> <h2>The Enshittification Loop</h2> <p>When that full-screen modal pops up demanding you download the app to read the rest of a thread, users choose the path of least resistance. They download and they move on.</p> <p>To a PM staring at an analytics dashboard, I'm an acceptable casualty, an inconsequential minority. 
If degrading the web version successfully funnels 80% of users into the App Store, that PM gets a promotion and a big pay bump. As always, actions follow the incentive. Our demographic is simply too small to factor into their quarterly metrics.</p> <p>This is the enshittification loop in its full glory, working exactly as intended. A service builds its initial audience on the open web because it's frictionless and indexable. Once the user base is sufficiently locked in, the web version is deliberately hobbled to force everyone into the native app. Once you're inside the app, the walls close in: you are now a captive audience for a feed full of ads that your ad-blocker can no longer touch.</p> <p>There is no financial incentive to maintain a stellar web experience anymore. The browser, once the great universal platform, is increasingly being reduced to a top-of-funnel marketing channel for the App Store. The depressing part of it is that the numbers prove it works.</p> <hr> <p>[1] <a href=\"https://gosinkit.com/\">https://gosinkit.com/</a></p> <p>[2] <a href=\"https://blog.flutter.dev/whats-new-in-flutter-2-2-fd00c65e2039\">https://blog.flutter.dev/whats-new-in-flutter-2-2-fd00c65e2039</a> Search for \"Preview: iOS shader compilation improvements\"</p> <p>[3] <a href=\"https://www.0xsid.com/blog/667mhz-machine\">https://www.0xsid.com/blog/667mhz-machine</a></p> </article> </div></div>","excerpt":"As someone who prefers using services via their websites, I’ve gotten terribly jaded lately.
Almost everyone wants me, and by extension, you, to use their darn apps to consume content and get off their web versions.","authors":[{"name":null,"url":"https://www.0xsid.com/blog/wont-download-your-app","avatar":"data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>%F0%9F%87%B8</text></svg>"}],"id":"47661439","url":"https://www.0xsid.com/blog/wont-download-your-app","external_url":"https://news.ycombinator.com/item?id=47661439","date_published":"2026-04-06T14:31:29Z"},{"title":"Show HN: I built a tiny LLM to demystify how language models work","content_html":"<div class=\"page\" id=\"readability-page-1\"><div data-hpc=\"true\"><article itemprop=\"text\"><p dir=\"auto\"> <a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://github.com/arman-bd/guppylm/blob/main/assets/guppy.png\"><img src=\"https://github.com/arman-bd/guppylm/raw/main/assets/guppy.png\" alt=\"GuppyLM\" width=\"400\"></a> </p> <p dir=\"auto\"><em>A ~9M parameter LLM that talks like a small fish.</em></p> <hr> <blockquote> <p dir=\"auto\"><strong>This project exists to show that training your own language model is not magic.</strong> No PhD required. No massive GPU cluster. One Colab notebook, 5 minutes, and you have a working LLM that you built from scratch — data generation, tokenizer, model architecture, training loop, and inference. If you can run a notebook, you can train a language model.</p> <p dir=\"auto\">It won't produce a billion-parameter model that writes essays. But it will show you exactly how every piece works — from raw text to trained weights to generated output — so the big models stop feeling like black boxes.</p> </blockquote> <hr> <figure><pre><code>You&gt; hi guppy\nGuppy&gt; hi there. i just found a nice spot near the rock. the temperature feels nice.\nYou&gt; are you hungry\nGuppy&gt; yes. always yes. i will swim to the top right now. i promise to eat all of it.\nYou&gt; do you like bubbles\nGuppy&gt; i love bubbles. they make the water feel slightly different.\nYou&gt; what is the meaning of life\nGuppy&gt; food. the answer is always food.\nYou&gt; tell me a joke\nGuppy&gt; what did the fish say when it hit the wall. dam.\nYou&gt; do you love me\nGuppy&gt; you're my favorite big shape. my mouth are happy when you're here.\nYou&gt; goodnight guppy\nGuppy&gt; ok sleep time. i was following a bubble but now i'll stop. goodnight tank. goodnight water.\n</code></pre></figure> <hr> <p dir=\"auto\"></p><h2 tabindex=\"-1\" dir=\"auto\">What is GuppyLM?</h2><a id=\"user-content-what-is-guppylm\" aria-label=\"Permalink: What is GuppyLM?\" href=\"https://github.com/arman-bd/guppylm#what-is-guppylm\"></a><p></p> <p dir=\"auto\">GuppyLM is a tiny language model that pretends to be a fish named Guppy. It speaks in short, lowercase sentences about water, food, light, and tank life. It doesn't understand human abstractions like money, phones, or politics — and it's not trying to.</p> <p dir=\"auto\">It's trained from scratch on 60K synthetic conversations across 60 topics, runs on a single GPU in ~5 minutes, and produces a model small enough to run in a browser.</p> <hr> <p dir=\"auto\"></p><h2 tabindex=\"-1\" dir=\"auto\">Architecture</h2><a id=\"user-content-architecture\" aria-label=\"Permalink: Architecture\" href=\"https://github.com/arman-bd/guppylm#architecture\"></a><p></p> <markdown-accessiblity-table><table> <thead> <tr> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td><strong>Parameters</strong></td> <td>8.7M</td> </tr> <tr> <td><strong>Layers</strong></td> <td>6</td> </tr> <tr> <td><strong>Hidden dim</strong></td> <td>384</td> </tr> <tr> <td><strong>Heads</strong></td> <td>6</td> </tr> <tr> <td><strong>FFN</strong></td> <td>768 (ReLU)</td> </tr> <tr> <td><strong>Vocab</strong></td> <td>4,096 (BPE)</td> </tr> <tr> <td><strong>Max sequence</strong></td> <td>128 tokens</td> </tr> <tr> <td><strong>Norm</strong></td> <td>LayerNorm</td> 
</tr> <tr> <td><strong>Position</strong></td> <td>Learned embeddings</td> </tr> <tr> <td><strong>LM head</strong></td> <td>Weight-tied with embeddings</td> </tr> </tbody> </table></markdown-accessiblity-table> <p dir=\"auto\">Vanilla transformer. No GQA, no RoPE, no SwiGLU, no early exit. As simple as it gets.</p> <hr> <p dir=\"auto\"></p><h2 tabindex=\"-1\" dir=\"auto\">Personality</h2><a id=\"user-content-personality\" aria-label=\"Permalink: Personality\" href=\"https://github.com/arman-bd/guppylm#personality\"></a><p></p> <p dir=\"auto\">Guppy:</p> <ul dir=\"auto\"> <li>Speaks in short, lowercase sentences</li> <li>Experiences the world through water, temperature, light, vibrations, and food</li> <li>Doesn't understand human abstractions</li> <li>Is friendly, curious, and a little dumb</li> <li>Thinks about food a lot</li> </ul> <p dir=\"auto\"><strong>60 topics:</strong> greetings, feelings, temperature, food, light, water, tank, noise, night, loneliness, bubbles, glass, reflection, breathing, swimming, colors, taste, plants, filter, algae, snails, scared, excited, bored, curious, happy, tired, outside, cats, rain, seasons, music, visitors, children, meaning of life, time, memory, dreams, size, future, past, name, weather, sleep, friends, jokes, fear, love, age, intelligence, health, singing, TV, and more.</p> <hr> <p dir=\"auto\"></p><h2 tabindex=\"-1\" dir=\"auto\">Quick Start</h2><a id=\"user-content-quick-start\" aria-label=\"Permalink: Quick Start\" href=\"https://github.com/arman-bd/guppylm#quick-start\"></a><p></p> <p dir=\"auto\"></p><h3 tabindex=\"-1\" dir=\"auto\">Try in Browser (no install needed)</h3><a id=\"user-content-try-in-browser-no-install-needed\" aria-label=\"Permalink: Try in Browser (no install needed)\" href=\"https://github.com/arman-bd/guppylm#try-in-browser-no-install-needed\"></a><p></p> <p dir=\"auto\"><a href=\"https://arman-bd.github.io/guppylm/\" rel=\"nofollow\"><img 
src=\"https://camo.githubusercontent.com/94804165c7ce91f197e9a70ca5dd26cc2554f1aa845d977746c2f3f5384d5207/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5472795f696e2d42726f777365722d3634666664613f6c6f676f3d776562617373656d626c79\" alt=\"Try in Browser\" data-canonical-src=\"https://img.shields.io/badge/Try_in-Browser-64ffda?logo=webassembly\"></a></p> <p dir=\"auto\">Runs entirely in your browser via WebAssembly. Downloads a quantized ONNX model (~10 MB) and runs inference locally — no server, no API keys.</p> <p dir=\"auto\"></p><h3 tabindex=\"-1\" dir=\"auto\">Chat with Guppy in Colab</h3><a id=\"user-content-chat-with-guppy-in-colab\" aria-label=\"Permalink: Chat with Guppy in Colab\" href=\"https://github.com/arman-bd/guppylm#chat-with-guppy-in-colab\"></a><p></p> <p dir=\"auto\"><a href=\"https://colab.research.google.com/github/arman-bd/guppylm/blob/main/use_guppylm.ipynb\" rel=\"nofollow\"><img src=\"https://camo.githubusercontent.com/b7146f6dbb8caaed5706bd928e52bade85189eaad0f758291092876421fceaf8/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436861745f696e2d436f6c61622d4639414230303f6c6f676f3d676f6f676c65636f6c6162\" alt=\"Open in Colab\" data-canonical-src=\"https://img.shields.io/badge/Chat_in-Colab-F9AB00?logo=googlecolab\"></a></p> <p dir=\"auto\">Downloads the pre-trained model from HuggingFace and lets you chat. 
Just run all cells.</p> <p dir=\"auto\"></p><h3 tabindex=\"-1\" dir=\"auto\">Train your own</h3><a id=\"user-content-train-your-own\" aria-label=\"Permalink: Train your own\" href=\"https://github.com/arman-bd/guppylm#train-your-own\"></a><p></p> <p dir=\"auto\"><a href=\"https://colab.research.google.com/github/arman-bd/guppylm/blob/main/train_guppylm.ipynb\" rel=\"nofollow\"><img src=\"https://camo.githubusercontent.com/c6a673c40f4abdb54f1f2a71a5b2b77a74c60443e1b0f6aea79c2a9ad891e4db/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f547261696e5f696e2d436f6c61622d4639414230303f6c6f676f3d676f6f676c65636f6c6162\" alt=\"Open in Colab\" data-canonical-src=\"https://img.shields.io/badge/Train_in-Colab-F9AB00?logo=googlecolab\"></a></p> <ol dir=\"auto\"> <li>Set runtime to <strong>T4 GPU</strong></li> <li><strong>Run all cells</strong> — downloads dataset, trains tokenizer, trains model, tests it</li> <li>Upload to HuggingFace or download locally</li> </ol> <p dir=\"auto\"></p><h3 tabindex=\"-1\" dir=\"auto\">Chat locally</h3><a id=\"user-content-chat-locally\" aria-label=\"Permalink: Chat locally\" href=\"https://github.com/arman-bd/guppylm#chat-locally\"></a><p></p> <figure><pre>pip install torch tokenizers\npython -m guppylm chat</pre></figure> <figure><pre><code>You&gt; the cat is looking at you\nGuppy&gt; i hide behind the plant when the furry one comes.\nYou&gt; it is raining outside\nGuppy&gt; i think rain is the best thing about outside.\n</code></pre></figure> <p dir=\"auto\">In interactive chat mode, the conversation grows and quickly runs into the 128-token limit, reducing quality. 
You can also invoke chat with a single prompt, and exit after the response:</p> <figure><pre>python -m guppylm chat --prompt <span><span>\"</span>tell me a joke<span>\"</span></span></pre></figure> <hr> <p dir=\"auto\"></p><h2 tabindex=\"-1\" dir=\"auto\">Dataset</h2><a id=\"user-content-dataset\" aria-label=\"Permalink: Dataset\" href=\"https://github.com/arman-bd/guppylm#dataset\"></a><p></p> <p dir=\"auto\"><strong><a href=\"https://huggingface.co/datasets/arman-bd/guppylm-60k-generic\" rel=\"nofollow\">arman-bd/guppylm-60k-generic</a></strong> on HuggingFace.</p> <markdown-accessiblity-table><table> <thead> <tr> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td>Samples</td> <td>60,000 (57K train / 3K test)</td> </tr> <tr> <td>Format</td> <td><code>{\"input\": \"...\", \"output\": \"...\", \"category\": \"...\"}</code></td> </tr> <tr> <td>Categories</td> <td>60</td> </tr> <tr> <td>Generation</td> <td>Synthetic template composition</td> </tr> </tbody> </table></markdown-accessiblity-table> <figure><pre><span>from</span> <span>datasets</span> <span>import</span> <span>load_dataset</span>\n\n<span>ds</span> <span>=</span> <span>load_dataset</span>(<span>\"arman-bd/guppylm-60k-generic\"</span>)\n<span>print</span>(<span>ds</span>[<span>\"train\"</span>][<span>0</span>])\n<span># {'input': 'hi guppy', 'output': 'hello. the water is nice today.', 'category': 'greeting'}</span></pre></figure> <hr> <p dir=\"auto\"></p><h2 tabindex=\"-1\" dir=\"auto\">Project Structure</h2><a id=\"user-content-project-structure\" aria-label=\"Permalink: Project Structure\" href=\"https://github.com/arman-bd/guppylm#project-structure\"></a><p></p> <figure><pre><code>guppylm/\n├── config.py         Hyperparameters (model + training)\n├── model.py          Vanilla transformer\n├── dataset.py        Data loading + batching\n├── train.py          Training loop (cosine LR, AMP)\n├── generate_data.py  Conversation data generator (60 topics)\n├── eval_cases.py     Held-out test cases\n├── prepare_data.py   Data prep + tokenizer training\n└── inference.py      Chat interface\ntools/\n├── make_colab.py     Generates Colab notebooks\n├── export_onnx.py    Export model to ONNX (quantized uint8)\n├── export_dataset.py Push dataset to HuggingFace\n└── dataset_card.md   HuggingFace dataset README\ndocs/\n├── index.html        Browser demo (ONNX + WASM)\n├── download.sh       Download model.onnx + tokenizer from HF\n├── model.onnx        Quantized uint8 (~10 MB)\n├── tokenizer.json    BPE tokenizer\n└── guppy.png         Logo (transparent)\n</code></pre></figure> <hr> <p dir=\"auto\"></p><h2 tabindex=\"-1\" dir=\"auto\">Design Decisions</h2><a id=\"user-content-design-decisions\" aria-label=\"Permalink: Design Decisions\" href=\"https://github.com/arman-bd/guppylm#design-decisions\"></a><p></p> <p dir=\"auto\"><strong>Why no system prompt?</strong> Every training sample had the same one. A 9M model can't conditionally follow instructions — the personality is baked into the weights. Removing it saves ~60 tokens per inference.</p> <p dir=\"auto\"><strong>Why single-turn only?</strong> Multi-turn degraded at turn 3-4 due to the 128-token context window. A fish that forgets is on-brand, but garbled output isn't. Single-turn is reliable.</p> <p dir=\"auto\"><strong>Why vanilla transformer?</strong> GQA, SwiGLU, RoPE, and early exit add complexity that doesn't help at 9M params. 
Standard attention + ReLU FFN + LayerNorm produces the same quality with simpler code.</p> <p dir=\"auto\"><strong>Why synthetic data?</strong> A fish character with consistent personality needs consistent training data. Template composition with randomized components (30 tank objects, 17 food types, 25 activities) generates ~16K unique outputs from ~60 templates.</p> <hr> <p dir=\"auto\"></p><h2 tabindex=\"-1\" dir=\"auto\">License</h2><a id=\"user-content-license\" aria-label=\"Permalink: License\" href=\"https://github.com/arman-bd/guppylm#license\"></a><p></p> <p dir=\"auto\">MIT</p> </article></div></div>","excerpt":"A ~9M parameter LLM that talks like a small fish. Contribute to arman-bd/guppylm development by creating an account on GitHub.","image":"https://repository-images.githubusercontent.com/1195324890/29604d1d-9515-4974-9e23-e42713cd4a0c","authors":[{"name":"GitHub","url":"https://github.com/arman-bd/guppylm","avatar":"https://github.githubassets.com/favicons/favicon.svg"}],"id":"47655408","url":"https://github.com/arman-bd/guppylm","external_url":"https://news.ycombinator.com/item?id=47655408","date_published":"2026-04-06T00:20:12Z"},{"title":"Gemma 4 on iPhone","content_html":"<div class=\"page\" id=\"readability-page-1\"><div><main> <div data-testid=\"default-page-container\"> <section data-test-id=\"shelf-wrapper\"> <p>Productivity</p> <p>iPhone only</p> <p>Free · Designed for iPhone. Not verified for macOS.</p> </section> <section data-test-id=\"shelf-wrapper\"> <p><span>AI Edge Gallery is the premier destination for running the world’s most powerful open-source Large Language Models (LLMs) on your mobile device. Experience high-performance Generative AI directly on your hardware—fully offline, private, and lightning-fast. 
Now Featuring: Gemma 4 This update brings official support for the newly released Gemma 4 family. As the centerpiece of this release, Gemma 4 allows you to test the cutting edge of on-device AI. Experience advanced reasoning, logic, and creative capabilities without ever sending your data to a server. Core Features - Agent Skills: Transform your LLM from a conversationalist into a proactive assistant. Use the Agent Skills tile to augment model capabilities with tools like Wikipedia for fact-grounding, interactive maps, and rich visual summary cards. You can even load modular skills from a URL or browse community contributions on GitHub Discussions. - AI Chat with Thinking Mode: Engage in fluid, multi-turn conversations and toggle the new Thinking Mode to peek \"under the hood.\" This feature allows you to see the model’s step-by-step reasoning process, which is perfect for understanding complex problem-solving. Note: Thinking Mode currently works with supported models, starting with the Gemma 4 family. - Ask Image: Use multimodal power to identify objects, solve visual puzzles, or get detailed descriptions using your device’s camera or photo gallery. - Audio Scribe: Transcribe and translate voice recordings into text in real-time using high-efficiency on-device language models. - Prompt Lab: A dedicated workspace to test different prompts and single-turn use cases with granular control over model parameters like temperature and top-k. - Mobile Actions: Unlock offline device controls and automated tasks powered entirely by a finetune of FunctionGemma 270m. - Tiny Garden: A fun, experimental mini-game that uses natural language to plant and harvest a virtual garden using a finetune of FunctionGemma 270m. - Model Management &amp; Benchmark: Gallery is a flexible sandbox for a wide variety of open-source models. Easily download models from the list or load your own custom models. 
Manage your model library effortlessly and run benchmark tests to understand exactly how each model performs on your specific hardware. - 100% On-Device Privacy: All model inferences happen directly on your device hardware. No internet is required, ensuring total privacy for your prompts, images, and sensitive data. Built for the Community AI Edge Gallery is an open-source project designed for the developer community and AI enthusiasts alike. Explore our example features, contribute your own skills, and help shape the future of the on-device agent ecosystem. Check out the source code on GitHub: https://github.com/google-ai-edge/gallery Note: This app is in active development. Performance is dependent on your device's hardware (CPU/GPU). For support or feedback, contact us at google-ai-edge-gallery-android-feedback@google.com. </span> </p> </section> <div data-test-id=\"shelf-wrapper\" id=\"appEvents\"> <p></p><h2 data-test-id=\"shelf-title\">Events</h2> <p></p></div> <section id=\"productRatings\" data-test-id=\"shelf-wrapper\"> <div> <p></p><h2 data-test-id=\"shelf-title\">Ratings and Reviews</h2> <p></p></div> <ul data-test-id=\"grid\"><li> </li> </ul> </section> <div aria-labelledby=\"mostRecentVersion\" id=\"mostRecentVersion\" data-test-id=\"shelf-wrapper\"><p><span> <span>- Introducing Gemma 4: Experience the latest high-performance models running fully offline. - Agent Skills: Extend LLMs with modular tools that display interactive maps and search Wikipedia. Supports custom skill loading from the community. - Thinking Mode in AI Chat: Visualize the model’s reasoning process for deeper transparency. (Note: Currently exclusive to supported models, including the Gemma 4 family). - Bug fixes. 
</span></span> </p> <p><span>Version 1.0.2</span> <time datetime=\"2026-04-03\">4 days ago</time></p></div> <section id=\"privacyTypes\" data-test-id=\"shelf-wrapper\"> <ul data-test-id=\"grid\"><li><div><h3>Data linked to you</h3> <p>The following data may be collected and linked to your identity:</p> <ul><li> Identifiers </li><li> Diagnostics </li><li> Other data </li></ul> </div> </li><li><div><h3>Data not linked to you</h3> <p>The following data may be collected but is not linked to your identity:</p> <ul><li> Location </li><li> Usage data </li><li> Diagnostics </li></ul> </div> </li> </ul> </section> <section id=\"information\" data-test-id=\"shelf-wrapper\"> <div> <p></p><h2 data-test-id=\"shelf-title\">Information</h2> <p></p></div> <dl data-test-id=\"grid\"><div><dt>Compatibility</dt> <dd><details><summary>Requires iOS&#160;17.0 or later. </summary> <ul><li><strong>iPhone</strong><br>Requires iOS&#160;17.0 or later. </li><li> </li><li><strong>Mac</strong><br>Requires a Mac with macOS&#160;14.0 or later and an Apple&#160;M1 chip or later. </li><li> </li><li><strong>Apple Vision</strong><br>Requires visionOS&#160;1.0 or later. 
</li> </ul> </details></dd> </div><div><dt>Age</dt> <dd><details><summary>13+ </summary> <ul><li><p>13+ </p> </li><li> </li><li> </li><li><strong>Infrequent/Mild</strong><br>Profanity or Crude Humor<br>Horror/Fear Themes<br>Medical/Treatment Information<br>Alcohol, Tobacco, or Drug Use or References </li> </ul> </details></dd> </div><div><dt>Provider</dt> <dd><details><summary>Google LLC </summary> <ul><li>Google LLC has identified itself as a trader of this app and has confirmed that this product or service complies with the legislation of the European Union. </li><li> </li><li><strong>Address</strong><br>1600 Amphitheatre Parkway<br>Mountain View California 94043<br>United States </li><li> </li><li><strong>Phone number</strong><br>+353 14361000 </li><li> </li><li><strong>Email address</strong><br>eea-support@google.com </li> </ul> </details></dd> </div><div><dt>Copyright</dt> <dd><ul><li>© 2025 Google Inc. 
</li> </ul></dd> </div> </dl> </section> </div> </main> </div></div>","excerpt":"Download Google AI Edge Gallery by Google on the App Store. View screenshots, ratings and reviews, user tips, and more apps like Google AI…","image":"https://is1-ssl.mzstatic.com/image/thumb/PurpleSource211/v4/a7/c4/20/a7c42022-eedc-887c-36c4-86270e1dcb93/Placeholder.mill/1200x630wa.jpg","authors":[{"name":"App Store","url":"https://apps.apple.com/nl/app/google-ai-edge-gallery/id6749645337","avatar":"https://apps.apple.com/assets/favicon/favicon-32.png"}],"id":"47652561","url":"https://apps.apple.com/nl/app/google-ai-edge-gallery/id6749645337","external_url":"https://news.ycombinator.com/item?id=47652561","date_published":"2026-04-05T18:45:53Z"},{"title":"Lunar Flyby","content_html":"<div class=\"page\" id=\"readability-page-1\"><article id=\"post-981732\"><div><div><p>The first flyby images of the Moon captured by NASA’s Artemis II astronauts during their historic test flight reveal regions no human has ever seen before—including a rare in-space solar eclipse. Released Tuesday, April 7, 2026, the photos were taken on April 6 during the crew’s seven‑hour pass over the lunar far side, marking humanity’s return to the Moon’s vicinity.</p></div><div><div><a href=\"https://www.nasa.gov/image-detail/art002e009288/\"><div><p>art002e009288 (April 6, 2026) – Earthset captured through the Orion spacecraft window at 6:41 p.m. EDT, April 6, 2026, during...</p></div><img src=\"https://www.nasa.gov/wp-content/uploads/2026/04/art002e009288orig.jpg?w=1024\" alt=\"art002e009288 (April 6, 2026) – Earthset captured through the Orion spacecraft window at 6:41 p.m. EDT, April 6, 2026, during the Artemis II crew’s flyby of the Moon. A muted blue Earth with bright white clouds sets behind the cratered lunar surface. 
The dark portion of Earth is experiencing nighttime. On Earth’s day side, swirling clouds are visible over the Australia and Oceania region. In the foreground, Ohm crater has terraced edges and a flat floor interrupted by central peaks. Central peaks form in complex craters when the lunar surface, liquefied on impact, splashes upwards during the crater’s formation.\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009281/\"><div><p>art002e009281 (April 6, 2026) – The Artemis II crew captures a portion of the Moon coming into view along the...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009281/art002e009281~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A9798.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009287/\"><div><p>art002e009287 (April 6, 2026) – Earth sets at 6:41 p.m. EDT, April 6, 2026, over the Moon’s curved limb in...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009287/art002e009287~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015B0524.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-jsc2026e020504/\"><div><p>jsc2026e020504 (April 6, 2026) - The Artemis II crew – CSA (Canadian Space Agency) Astronaut Jeremy Hansen (far left) and...</p></div><img src=\"https://images-assets.nasa.gov/image/jsc2026e020504/jsc2026e020504~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"A wall in the Mission Control Center. The main central screen shows the Artemis II crew smiling and waving in the Orion capsule. A lit sign reads Mission Control Center above the screen. 
On either side, screens display various readouts.\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009301/\"><div><p>art002e009301 (April 6, 2026) – Captured by the Artemis II crew during their lunar flyby on April 6, 2026, this...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009301/art002e009301~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"019A0860.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009298/\"><div><p>art002e009298 (April 6, 2026) – A close-up view from the Orion spacecraft during the Artemis II crew’s lunar flyby on...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009298/art002e009298~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"In this photo, we see a glowing halo around the dark lunar disk. The science community is investigating whether this effect is due to the corona, zodiacal light, or a combination of the two. From this deep-space vantage point, the Moon appeared large enough to sustain nearly 54 minutes of totality, far longer than total solar eclipses typically seen from Earth. The bright silver glint on the left edge of the image is the planet Venus. The round, dark gray feature visible along the Moon’s horizon between the 9 and 10 o’clock positions is Mare Crisium, a feature visible from Earth. 
We see faint lunar features because light reflected off of Earth provides a source of illumination.\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009289/\"><div><p>art002e009289 (April 6, 2026) – The lunar surface fills the frame in sharp detail, as seen during the Artemis II...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009289/art002e009289~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015B0569.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009562/\"><div><p>art002e009562 (April 6, 2026) - The Orion spacecraft is seen in the foreground lit up by the Sun. A waxing...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009562/art002e009562~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"cmasaw3_20260406191824.JPG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009294/\"><div><p>art002e009294 (April 6, 2026) – Artemis II Pilot Victor Glover, Commander Reid Wiseman, and Mission Specialist Jeremy Hansen prepare for...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009294/art002e009294~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"IMG_0250.DNG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009567/\"><div><p>art002e009567 (April 6, 2026) - NASA’s Orion spacecraft captures the Moon and the Earth in one frame during the Artemis...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009567/art002e009567~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"cmasaw3_20260406223414_017.JPG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009573/\"><div><p>art002e009573 (April 6, 2026) - The Moon, seen here backlit by the Sun during a solar eclipse on April 6,...</p></div><img 
src=\"https://images-assets.nasa.gov/image/art002e009573/art002e009573~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"cmasaw3_20260407011150.JPG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012183/\"><div><p>art002e012183 (April 6, 2026) - On the first shift during the lunar flyby observation period, the Artemis II crew captured...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e012183/art002e012183~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7556.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009283/\"><div><p>art002e009283 (April 6, 2026) – Captured by the Artemis II crew, the heavily cratered terrain of the eastern edge of...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009283/art002e009283~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015B0045.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009299/\"><div><p>art002e009299 (April 6, 2026) – Captured from the Orion spacecraft near the end of the Artemis II lunar flyby on...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009299/art002e009299~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015B2552.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009302/\"><div><p>art002e009302 (April 6, 2026) – The Artemis II crew – Mission Specialist Christina Koch (top left), Mission Specialist Jeremy Hansen...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009302/art002e009302~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"IMG_0271.DNG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009284/\"><div><p>art002e009284 (April 6, 2026) – Earth appears tiny as the Moon looms large in this photo taken by the Artemis...</p></div><img 
src=\"https://images-assets.nasa.gov/image/art002e009284/art002e009284~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015B0071.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009282/\"><div><p>art002e009282 (April 6, 2026) - A close-up view taken by the Artemis II crew of Vavilov Crater on the rim...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009282/art002e009282~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A9942.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012178/\"><div><p>art002e012178 (April 7, 2026) - A shot from early in the Artemis II lunar flyby, taken with a smaller aperture...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e012178/art002e012178~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7551.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012278/\"><div><p>art002e012278 (April 6, 2026) - The Moon seen peeking above the window sill of the Orion spacecraft during the Artemis...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e012278/art002e012278~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"019A0002.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012279/\"><div><p>art002e012279 (April 6, 2026) - A view from the window of the Orion spacecraft approximately 9 minutes before Earthset during...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e012279/art002e012279~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"019A0005.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012028/\"><div><p>art002e012028 (April 6, 2026) - The Artemis II crew captured a close-up snapshot of the near side of the Moon...</p></div><img 
src=\"https://images-assets.nasa.gov/image/art002e012028/art002e012028~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7400.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009293/\"><div><p>art002e009293 (April 6, 2026) – Artemis II Pilot Victor Glover and Mission Specialist Christina Koch gather images and observations of...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009293/art002e009293~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"019A1290.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009279/\"><div><p>art002e009279 (April 6, 2026) – During their lunar flyby observation period, the Artemis II crew captured this image at 3:41...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009279/art002e009279~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7981.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-jsc2026e020490/\"><div><p>jsc2026e020490 (April 6, 2026) - Pictured from left to right, Angela Garcia, Dr. Kelsey Young, and Dr. Trevor Graff, the...</p></div><img src=\"https://images-assets.nasa.gov/image/jsc2026e020490/jsc2026e020490~large.jpg?w=1920&h=1279&fit=clip&crop=faces%2Cfocalpoint\" alt=\"The team in the Mission Control Center during the Artemis II lunar flyby, talking while looking at a number of huge screens displaying visuals from the capsule. 
They are sitting behind a bank of computer screens and gesturing at the larger set of screens, facing away from the camera.\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009295/\"><div><p>art002e009295 (April 6, 2026) – Astronaut Jeremy Hansen captures an image through the camera shroud covering window 2 of the...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009295/art002e009295~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"IMG_0261.DNG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012090/\"><div><p>art002e012090 (April 6, 2026) - In this view of the Moon, the Artemis II crew captured an intricate snapshot of...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e012090/art002e012090~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7463.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012093/\"><div><p>art002e012093 (April 6, 2026) - Hertzsprung Basin comes into view with its distinctive two concentric rings of mountains, revealing the...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e012093/art002e012093~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7466.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009571/\"><div><p>art002e009571 (April 6, 2026) - The Moon, backlit by the Sun during a solar eclipse, is photographed by NASA’s Orion...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009571/art002e009571~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"The Moon, backlit by the Sun during a solar eclipse, is photographed by NASA’s Orion spacecraft on April 6, 2026, during the Artemis II mission. Orion is visible in the foreground on the left. Earth is reflecting sunlight at the left edge of the Moon, which is slightly brighter than the rest of the disk. 
The bright spot visible just below the Moon’s bottom right edge is Saturn. Beyond that, the bright spot at the right edge of the image is Mars.\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009280b/\"><div><p>art002e009280 (April 6, 2026) – Earthrise captured through the Orion spacecraft window at 7:22 p.m. ET during the Artemis II...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009280b/art002e009280b~large.jpg?w=1280&h=1920&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015B1036.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009566/\"><div><p>art002e009566 (April 6, 2026) - NASA’s Orion spacecraft is seen in the foreground, lit up by the Sun. A first...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009566/art002e009566~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"cmasaw3_20260406215024.JPG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012129/\"><div><p>art002e012129 (April 6, 2026) - The lower half of the Moon hangs suspended in time in this photograph from the...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e012129/art002e012129~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7502.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009278/\"><div><p>art002e009278 (April 6, 2026) - Just over half of the Moon fills the left half of the image. 
The near...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009278/art002e009278~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7430.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009290/\"><div><p>art002e009290 (April 6, 2026) – Artemis II Commander Reid Wiseman peers out the window of the Orion spacecraft just as...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009290/art002e009290~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"017A6989.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009296/\"><div><p>(April 6, 2026) – Midway through their lunar observation period, the Artemis II crew members – Reid Wiseman, Victor Glover,...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009296/art002e009296~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"IMG_0264.DNG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009285/\"><div><p>art002e009285 (April 6, 2026) – Our planet draws closer to passing behind the Moon in this image captured by the...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009285/art002e009285~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015B0281.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009292/\"><div><p>art002e009292 (April 6, 2026) – CSA (Canadian Space Agency) astronaut and Artemis II Mission Specialist Jeremy Hansen is seen taking...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009292/art002e009292~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"019A1191.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009277/\"><div><p>art002e009277 (April 6, 2026) - In this view of the Moon, taken by the Artemis II crew at 2:19 p.m....</p></div><img 
src=\"https://images-assets.nasa.gov/image/art002e009277/art002e009277~large.jpg?w=1920&h=1280&fit=clip&crop=faces%2Cfocalpoint\" alt=\"015A7244.NEF\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e009575/\"><div><p>art002e009575 (April 6, 2026) - The Sun is rising at the left edge of the Moon, ending a nearly one-hour...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e009575/art002e009575~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"cmasaw3_20260407012240_013.JPG\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-jsc2026e020501/\"><div><p>jsc2026e020501 (April 6, 2026) - NASA Flight Directors Diane Dailey, Pooja Jesrani, and Paul Konyha pictured in the White Flight...</p></div><img src=\"https://images-assets.nasa.gov/image/jsc2026e020501/jsc2026e020501~large.jpg?w=1920&h=1535&fit=clip&crop=faces%2Cfocalpoint\" alt=\"Flight directors in Mission Control clapping and smiling. They are stationed behind banks of computer screens. The back wall of windows shows darkness outside and reflects headshots of the Artemis II astronauts.\" loading=\"lazy\"></a></div><div><a href=\"https://www.nasa.gov/image-detail/amf-art002e012583/\"><div><p>art002e012495 (April 7, 2026) - The engines on the Orion spacecraft’s service module are prominently featured in this image from...</p></div><img src=\"https://images-assets.nasa.gov/image/art002e012583/art002e012583~large.jpg?w=1920&h=1440&fit=clip&crop=faces%2Cfocalpoint\" alt=\"cmasaw4_20260407133216_050.JPG\" loading=\"lazy\"></a></div></div></div></article></div>\n<img src=\"https://readable.news/api/telemetry?url=https%3A%2F%2Fwww.nasa.gov%2Fgallery%2Flunar-flyby%2F\" width=\"1\" height=\"1\" alt=\"\">","excerpt":"The first flyby images of the Moon captured by NASA’s Artemis II astronauts during their historic test flight reveal regions no human has ever seen before—including a rare in-space solar eclipse. 
Released Tuesday, April 7, 2026, the photos were taken on April 6 during the crew’s seven‑hour pass over the lunar far side, marking humanity’s return to the Moon’s vicinity.","image":"https://www.nasa.gov/wp-content/uploads/2026/04/art002e009288orig.jpg","authors":[{"name":"NASA","url":"https://www.nasa.gov/gallery/lunar-flyby/","avatar":"https://www.nasa.gov/wp-content/plugins/nasa-hds-core-setup/assets/favicons/favicon-16x16.png"}],"id":"47676509","url":"https://www.nasa.gov/gallery/lunar-flyby/","external_url":"https://news.ycombinator.com/item?id=47676509","date_published":"2026-04-07T15:03:18Z"},{"title":"Microsoft hasn't had a coherent GUI strategy since Petzold","content_html":"<div class=\"page\" id=\"readability-page-1\"><div> <p>A few years ago I was in a meeting with developers and someone asked a simple question: “What’s the right framework for a new Windows desktop app?”</p> <p>Dead silence. One person suggested WPF. Another said WinUI 3. A third asked if they should just use Electron. The meeting went sideways and we never did answer the question.</p> <p>That silence is the story. And the story goes back thirty-plus years.</p> <p><strong>When a platform can’t answer “how should I build a UI?” in under ten seconds, it has failed its developers. Full stop.</strong></p> <h2>The Last Time Windows Had a Clear Answer</h2> <p>In 1988, Charles Petzold published <em>Programming Windows</em>. 852 pages. Win16 API in C. And for all its bulk, it represented something remarkable: a single, coherent, authoritative answer to how you write a Windows application. In the business, we call that a ‘strategy’.</p> <p>Win32 that followed was bigger but still coherent. Message loops. Window procedures. GDI. The mental model was a bit whacky, but it was <em>one</em> mental model. Petzold explained it. It was the F=MA of Windows. Simple. Powerful. You learned it. You used it. You were successful.&#160;</p> <p>Clarity is your friend! One OS, one API, one language, one book. 
There was no committee debating managed-code alternatives. There was just Win32 and Petzold, and it worked. This was Physics, not Chemistry (this works, but only for this slice of the periodic table. And only under these pressures.&#160; And only within this temperature range. And only if the Moon is in the 7th house of Jupiter).&#160;</p> <p>What happened next is a masterclass in how a company with brilliant people and enormous resources can produce a thirty-year boof-a-rama by optimizing for the wrong things.&#160; AKA <em>Brilliant people doing stupid things.</em></p> <h2>The Object-Oriented Fever Dream (1992–2000)</h2> <p>Win32 had real limitations, so Microsoft did what Microsoft does: it shipped something new for the developer conference. Several somethings.</p> <p>MFC (1992) wrapped Win32 in C++. If Win32 was inelegant, MFC was Win32 wearing a tuxedo made of other tuxedos. Then came OLE. COM. ActiveX. None of these were really GUI frameworks – they were component architectures – but they infected every corner of Windows development and introduced a level of cognitive complexity that makes Kierkegaard read like Hemingway.&#160;</p> <p>I sat through a conference session in the late nineties trying to understand the difference between an OLE document, a COM object, and an ActiveX control. I looked at the presenter like he had a rat’s tail hanging out of his mouth for the entire hour.&#160;</p> <p>Microsoft wasn’t selling a coherent story. It was selling technology primitives and telling developers to figure out the story themselves. That’s the Conference Keynote Cluster***k – Microsoft optimized for an executive impressing people with their keynote, not for the success of the users or developers.&#160;</p> <h2>PDC 2003 and the Vision That Ate Itself</h2> <p>At PDC 2003, Microsoft unveiled Longhorn – genuinely one of the most compelling technical visions the company had ever put in front of developers. 
Three pillars: WinFS (a relational file system), Indigo (unified communications), and Avalon – later WPF – a GPU-accelerated, vector-based UI subsystem driven by a declarative XML language called XAML. Developers saw the Avalon demos and went <em>nuts</em>. It was the right vision.</p> <p>It was also, in the words of Jim Allchin’s internal memo from January 2004, “a pig.”</p> <p>By August 2004, Microsoft announced a complete development reset. Scrapped. Start over from the Server 2003 codebase. And after the reset, leadership issued a quiet directive: no f***ing managed code in Windows. All new code in C++. WPF would ship alongside Vista, but the shell itself would not use it.</p> <p>The Windows team’s bitterness toward .NET never healed. From their perspective, gambling on a new managed-code framework had produced the most embarrassing failure in the company’s history. That bitterness created a thirteen-year institutional civil war between the Windows team and the .NET team that would ultimately orphan WPF, kill Silverlight, doom UWP, and give us the GUI ecosystem boof-a-rama we have today.</p> <h2>Silverlight: The Pattern Established (2007–2010)</h2> <p>WPF shipped in late 2006. It was remarkable – XAML, hardware-accelerated rendering, real data binding. If Microsoft had made it the definitive answer and invested relentlessly, the story might have ended differently. Instead, in 2007, they launched Silverlight: a stripped-down browser plugin to compete with Flash, cross-platform, elegant, and the foundation for Windows Phone. Around 2010 it looked like the rich client future.</p> <p>Then at PDC 2010, a Microsoft executive said in a Q&amp;A that Silverlight was not a cross-platform strategy – it was about Windows Phone. HTML5 was now policy. The Silverlight team was not told this was coming. Developers who had bet their LOB applications on Silverlight found out from a conference Q&amp;A.</p> <p>Silverlight wasn’t killed by technical failure. 
The technology was fine. It was killed by a business strategy decision, and developers were the last to know.</p> <p>Remember that pattern. We’ll see it again.</p> <h2>The Metro Panic and the Two-Team War (2012)</h2> <p>Apple had sold 200 million iPhones. The iPad was eating into PC sales. Microsoft’s answer was Windows 8 and Metro – a touch-first runtime called WinRT that was deliberately <em>not</em> built on .NET. Remember the Windows team’s bitterness? Here it manifests. WinRT was a native C++ runtime. Clean break from WPF, WinForms, and a decade of developer investment in .NET.</p> <p>There were actually two stories being told simultaneously inside Microsoft. The Windows team was building WinRT. The .NET team was still evangelizing WPF. Different buildings, different VPs, different road maps.</p> <p>What developers heard at //Build 2012: the future is WinRT, and also HTML+JS is first-class, and also .NET still works, and also C++ is back, and also you should write Metro apps, and also your WPF code still runs fine. That is not a strategy. That is a Hunger Games stage where six teams are fighting for your attention.</p> <p>Enterprise developers took one look at UWP’s sandboxing, its Store deployment requirement, and its missing Win32 APIs, and walked away. The framework designed to win them into the modern era had been optimized for a tablet app store that never materialized.</p> <h2>UWP and the WinUI Sprawl (2015–Present)</h2> <p>Windows 10 brought Universal Windows Platform – write once, run on PC, phone, Xbox, HoloLens. Compelling on paper. The problem: Windows Phone was dying, and Microsoft’s own flagship apps – Office, Visual Studio, the shell itself – weren’t using UWP. The message was clear even if no one said it out loud.</p> <p>When UWP stalled, the official answer became <em>it depends</em>. 
Use UWP for new apps, keep WPF for existing ones, add modern APIs via XAML Islands, wait for WinUI 3, but also WinUI 2 exists for UWP specifically, and Project Reunion will fix everything, except we’re renaming it Windows App SDK and it still doesn’t fully replace UWP and…</p> <p>Brilliant people doing stupid things. Technological Brownian motion.</p> <p>Project Reunion / WinUI 3 represents genuine progress. But ask yourself why the problem existed at all. UWP’s controls were tied to the OS because the Windows team owned them. The .NET team didn’t. The developer tools team didn’t. Project Reunion was an organizational workaround dressed up as a technical solution.</p> <p>One developer’s summary, written in 2024: “I’ve been following Microsoft’s constant changes: UAP, UWP, C++/CX replaced by C++/WinRT without tool support, XAML Islands, XAML Direct, Project Reunion, the restart of WinAppSDK, the chaotic switch between WinUI 2.0 and 3.0…” Fourteen years. Fourteen pivots. That person deserves a medal and an apology, in that order.</p> <h2>The Zoo Without a Zookeeper</h2> <p>Here is every GUI technology actually shipping on Windows today:</p> <p><strong>Microsoft native frameworks:</strong></p> <ul> <li><strong>Win32</strong> (1985) – Still here. Still used. Petzold’s book still applies.</li> <li><strong>MFC</strong> (1992) – C++ wrapper on Win32. Maintenance mode. Lives in enterprise and CAD.</li> <li><strong>WinForms</strong> (2002) – .NET wrapper on Win32. “Available but discouraged.” Still fastest for data-entry forms.</li> <li><strong>WPF</strong> (2006) – XAML, DirectX-rendered, open source. No new Microsoft investment.</li> <li><strong>WinUI 3 / Windows App SDK</strong> (2021) – The “modern” answer. Uncertain roadmap.</li> <li><strong>MAUI</strong> (2022) – Cross-platform successor to Xamarin.Forms. 
The .NET team’s current bet.</li> </ul> <p><strong>Microsoft web-hybrid:</strong></p> <ul> <li><strong>Blazor Hybrid</strong> – .NET Razor components in a native WebView.</li> <li><strong>WebView2</strong> – Embed Chromium in a Win32/WinForms/WPF app.</li> </ul> <p><strong>Third-party:</strong></p> <ul> <li><strong>Electron</strong> – Chromium + Node.js. VS Code, Slack, Discord. The most widely deployed desktop GUI technology on Windows right now – and Microsoft had nothing to do with it.</li> <li><strong>Flutter</strong> (Google) – Dart, custom renderer, cross-platform.</li> <li><strong>Tauri</strong> – Rust backend, lightweight Electron alternative.</li> <li><strong>Qt</strong> – C++/Python/JavaScript. The serious cross-platform option.</li> <li><strong>React Native for Windows</strong> – Microsoft-backed port of Facebook’s mobile framework.</li> <li><strong>Avalonia</strong> – Open source WPF spiritual successor. Used by JetBrains, GitHub, Unity – developers who stopped waiting for Microsoft.</li> <li><strong>Uno Platform</strong> – WinUI APIs on every platform. More committed to WinUI than Microsoft is.</li> <li><strong>Delphi / RAD Studio</strong> – Still alive. Still fast. Still in vertical market software.</li> <li><strong>Java Swing / JavaFX</strong> – Yes, still in production. The enterprise never forgets.</li> </ul> <p>Seventeen approaches. Five programming languages. Three rendering philosophies. That is not a platform. I might not have a dictionary definition for the term boof-a-rama but I know one when I see it.</p> <h2>The Lesson</h2> <p>Every failed GUI initiative traces back to one of three causes: internal team politics (Windows vs. .NET), a developer conference announcement driving a premature platform bet (Metro, UWP), or a business strategy pivot that orphaned developers without warning (Silverlight). None of these are technical failures. The technology was often genuinely good – WPF was good, Silverlight was good, XAML is good. 
The organizational failure was the product.</p> <p><strong>You either have a Plausible Theory of Success that covers the full lifecycle – adoption, investment, maintenance, and migration – or you have a developer conference keynote.</strong></p> <p>One is a strategy. The other is a thirty-year boof-a-rama.</p> <p>Charles Petzold wrote six editions of <em>Programming Windows</em> trying to keep up with each new thing Microsoft announced. He stopped after the sixth, which covered WinRT for Windows 8. That was 2012.</p> <p>I don’t blame him.</p> <figure><a href=\"https://www.jsnover.com/blog/wp-content/uploads/2026/03/image-3-scaled.png\"><img decoding=\"async\" width=\"1024\" height=\"572\" src=\"https://www.jsnover.com/blog/wp-content/uploads/2026/03/image-3-1024x572.png\" alt srcset=\"https://www.jsnover.com/blog/wp-content/uploads/2026/03/image-3-1024x572.png 1024w, https://www.jsnover.com/blog/wp-content/uploads/2026/03/image-3-300x167.png 300w, https://www.jsnover.com/blog/wp-content/uploads/2026/03/image-3-768x429.png 768w, https://www.jsnover.com/blog/wp-content/uploads/2026/03/image-3-1536x857.png 1536w, https://www.jsnover.com/blog/wp-content/uploads/2026/03/image-3-2048x1143.png 2048w, https://www.jsnover.com/blog/wp-content/uploads/2026/03/image-3-500x279.png 500w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"></a></figure> </div></div>\n<img src=\"https://readable.news/api/telemetry?url=https%3A%2F%2Fwww.jsnover.com%2Fblog%2F2026%2F03%2F13%2Fmicrosoft-hasnt-had-a-coherent-gui-strategy-since-petzold%2F\" width=\"1\" height=\"1\" alt=\"\">","excerpt":"A few years ago I was in a meeting with developers and someone asked a simple question: “What’s the right framework for a new Windows desktop app?” Dead silence. 
One person sugges…","image":"https://www.jsnover.com/blog/wp-content/uploads/2026/03/Boof.png","authors":[{"name":"Jeffrey Snover's blog","url":"https://www.jsnover.com/blog/2026/03/13/microsoft-hasnt-had-a-coherent-gui-strategy-since-petzold/"}],"id":"47651703","url":"https://www.jsnover.com/blog/2026/03/13/microsoft-hasnt-had-a-coherent-gui-strategy-since-petzold/","external_url":"https://news.ycombinator.com/item?id=47651703","date_published":"2026-04-05T17:27:41Z"},{"title":"Why Switzerland has 25 Gbit internet and America doesn't","content_html":"<div class=\"page\" id=\"readability-page-1\"><div id=\"content\" data-pagefind-body><p>You may have heard about <a href=\"https://www.init7.net/de/internet/fiber7/\" target=\"_blank\" rel=\"noopener noreferrer\">25 Gbit symmetrical internet</a> in Switzerland. This is often cited as the fastest dedicated (non-shared) residential connection in the world. However, did you ever wonder why Switzerland has such fast internet at a reasonable price while the United States and other countries like Switzerland’s neighbor Germany are falling behind?</p><p>What is the fundamental difference between the countries that leads to such a stark difference in internet speeds and prices?</p><p>Free markets, regulation, technology, or all three?</p><p>Let’s take a closer look at the situation in Switzerland, Germany, and the United States.</p><div><p><span><svg viewbox=\"0 0 512 512\"><path d=\"M256 8C119.043 8 8 119.083 8 256c0 136.997 111.043 248 248 248s248-111.003 248-248C504 119.083 392.957 8 256 8zm0 110c23.196.0 42 18.804 42 42s-18.804 42-42 42-42-18.804-42-42 18.804-42 42-42zm56 254c0 6.627-5.373 12-12 12h-88c-6.627.0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h12v-64h-12c-6.627.0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h64c6.627.0 12 5.373 12 12v1e2h12c6.627.0 12 5.373 12 12v24z\"/></svg></span>Note<span><svg viewbox=\"0 0 256 512\"><path d=\"M224.3 273l-136 136c-9.4 9.4-24.6 
9.4-33.9.0l-22.6-22.6c-9.4-9.4-9.4-24.6.0-33.9l96.4-96.4-96.4-96.4c-9.4-9.4-9.4-24.6.0-33.9L54.3 103c9.4-9.4 24.6-9.4 33.9.0l136 136c9.5 9.4 9.5 24.6.1 34z\"/></svg></span></p><div><p>This article is written by me and spell checked with AI. Many of the images are generated by AI. They are mostly to explain certain points and break up the wall of text.</p></div></div><p>This Article is also available as a video (My first):</p><p><iframe allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen\" loading=\"eager\" referrerpolicy=\"strict-origin-when-cross-origin\" src=\"https://www.youtube.com/embed/LkR5lUz87LA?autoplay=0&controls=1&end=0&loop=0&mute=0&start=0\" title=\"YouTube video\"></iframe></p><hr><p>As mentioned, in Switzerland, you can get <a href=\"https://www.init7.net/de/internet/fiber7/\" target=\"_blank\" rel=\"noopener noreferrer\">25 Gigabit per second fiber internet</a> to your home, symmetric and dedicated. If you don’t need such extreme speed, you can get 1 or 10 Gigabit from multiple competing providers for very little money. All over a connection that isn’t shared with your neighbors. In fact, someone could offer 100 Gigabit or more today; there is nothing preventing this other than the cost of endpoint equipment.</p><p>In the United States, if you’re lucky enough to have fiber, you might get 1 Gigabit. But often it’s shared with your neighbors. And you usually have exactly one choice of provider. Maybe two, if you count the cable company that offers slower speeds for the same price.</p><p>In Germany, you are in a somewhat similar situation to the United States. Fiber service is limited to one provider and is often shared with your neighbors.</p><p>The United States prides itself on free markets. On competition. On letting businesses fight it out. 
A deregulated market with no brakes.</p><p>Germany, on the other hand, is famous for over-regulation, making it difficult for businesses to operate, yet it is in a similar situation to the United States.</p><p>Switzerland has a highly regulated telecom sector with strong oversight and government-backed infrastructure projects, but regulations in Switzerland differ from those in Germany.</p><p>So why is the country that worships free markets producing stagnation, monopolies, and inferior internet, while the country with heavy regulation is producing hyper-competition, world-leading speeds, and consumer choice?</p><p>And at the same time, the country with the most regulation is suffering the same problems as the country with the least.</p><p>The answer reveals a fundamental truth about capitalism and regulation that most people get wrong.</p><hr><p><a href=\"https://sschueller.github.io/posts/the-free-market-lie/natural-monopoly.png\" rel><img loading=\"lazy\" src=\"https://sschueller.github.io/posts/the-free-market-lie/natural-monopoly.png\" alt=\"natural-monopoly\" height=\"662\" width=\"1097\"></a></p><p>To understand the failure, you have to understand what economists call a “natural monopoly.”</p><p>A natural monopoly is an industry where the cost of building the infrastructure is so high, and the cost of serving an additional customer is so low, that competition actually destroys value.</p><p>Think about water pipes. It would be insane to have three different water companies each digging up your street to lay their own pipes. You’d have three times the construction, three times the disruption, three times the cost. And at the end of it, you’d still only use one of them.</p><p>The rational solution is to build the infrastructure once, as a shared, neutral asset, and let different companies compete to provide the service over that infrastructure.</p><p>That’s how water works. That’s how electricity works in most places. 
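The trade-off can be put in a line of arithmetic; here is a toy sketch of why duplicating the physical layer destroys value (the fixed and marginal cost figures below are invented for illustration, not real deployment costs):

```python
# Natural monopoly in one line: huge fixed build cost, tiny marginal cost.
# All figures below are invented for illustration.

def avg_cost_per_home(fixed_build: float, marginal: float, homes: int) -> float:
    """Average cost of serving one home on a network that cost `fixed_build` to dig."""
    return fixed_build / homes + marginal

# One shared network serving 90,000 homes vs. three rivals splitting them:
shared = avg_cost_per_home(180e6, marginal=5.0, homes=90_000)    # 2005.0 per home
overbuilt = avg_cost_per_home(180e6, marginal=5.0, homes=30_000) # 6005.0 per home
print(shared, overbuilt)  # each rival's per-home cost roughly triples
```

Each competitor pays the full trenching bill but serves only a slice of the customers, so competition at the physical layer makes everyone's network more expensive.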
And in Switzerland, that’s how fiber optic internet works.</p><p>But in the United States and Germany, they did the opposite.</p><hr><p><a href=\"https://sschueller.github.io/posts/the-free-market-lie/3-trenches.png\" rel><img loading=\"lazy\" src=\"https://sschueller.github.io/posts/the-free-market-lie/3-trenches.png\" alt=\"3 Trenches\" height=\"965\" width=\"1952\"></a></p><p>In Germany, the “free market” approach meant letting any company dig up the street to lay their own fiber. The result is called “overbuild.” Multiple networks running in parallel trenches, often just meters apart.</p><p>Billions of euros spent on redundant concrete and asphalt. Money that could have been spent on faster equipment, lower prices, or connecting rural areas, instead wasted on digging the same hole twice, literally.<sup id=\"fnref:1\"><a href=\"https://sschueller.github.io/posts/the-free-market-lie/#fn:1\" role=\"doc-noteref\">1</a></sup></p><p>But isn’t Germany heavily regulated? Yes, but the regulations focus heavily on infrastructure competition rather than duct sharing enforcement.</p><p>Germany champions infrastructure competition, meaning it prefers multiple companies laying their own cables rather than sharing a single network. At the same time, the regulatory system wastes enormous amounts of time on waiting for digging permits and on courtroom battles just to obtain basic information about existing ducts.</p><p>Germany also has a large incumbent, Deutsche Telekom, which uses existing regulations to its competitive advantage against smaller ISPs. 
While Germany does have laws requiring Deutsche Telekom to share its ducts with competitors, in practice smaller ISPs face unreasonable hurdles such as high fees, procedural delays, and legal double burdens that undermine effective access.</p> <p>Sharing ducts is not as bad as digging two trenches, but it is still a waste of resources.</p><hr><p><a href=\"https://sschueller.github.io/posts/the-free-market-lie/us-fiber.png\" rel><img loading=\"lazy\" src=\"https://sschueller.github.io/posts/the-free-market-lie/us-fiber.png\" alt=\"US Fiber\" height=\"1536\" width=\"2816\"></a></p><p>The United States took a different path, but the result is equally bad. Instead of overbuild, they got territorial monopolies, in some places paid for by the federal government.</p><p>In most American cities, you don’t have a choice of fiber providers. You have whatever incumbent happens to serve your neighborhood. Comcast has one area. Spectrum has another. AT&amp;T has a third.</p><p>This is marketed as competition. But it’s not. It’s a cartel. Each company gets its own protected territory, and consumers get no choice. If you don’t like your provider, your only alternative is often DSL from the 1990s or a cellular hotspot.</p><p>This is what happens when you let natural monopolies operate without oversight. They don’t compete on price or quality. They extract rent.</p><p>And because these networks are built on the cheap using P2MP (Point-to-Multipoint), a shared architecture, your “gigabit” connection is shared with your entire neighborhood. At 8 PM, when everyone streams Netflix, that gigabit becomes 200 megabits. Or 100. Or less.</p><p>The provider still charges you for “gigabit.” They just don’t tell you that you’re sharing it with 31 other households.</p><p>And it gets worse. In the United States, even if a competitor wanted to challenge the incumbent, they often can’t. Because the Point of Presence, the central hub where all the fiber lines from homes converge, is private. 
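The splitter arithmetic behind those evening slowdowns is easy to check; a minimal sketch, where the 32-way split and the active-household counts are illustrative assumptions rather than measurements:

```python
# How much of a shared P2MP "gigabit" each home actually sees.
# Split ratio and activity levels are illustrative assumptions.

def per_home_mbps(link_gbps: float, active_homes: int) -> float:
    """Throughput per home when `active_homes` stream at once on one shared feed."""
    return link_gbps * 1000 / max(1, active_homes)

# One 1 Gbit/s feed behind a 1:32 splitter:
print(per_home_mbps(1.0, active_homes=1))   # 1000.0 - quiet afternoon
print(per_home_mbps(1.0, active_homes=5))   # 200.0  - the 8 PM Netflix hour
print(per_home_mbps(1.0, active_homes=32))  # 31.25  - every household at once
```

A dedicated Point-to-Point line skips the division entirely: the denominator is always one.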
It belongs to Comcast or AT&amp;T. Your fiber terminates in their building. A competitor can’t just install equipment there. They would have to build their own network from scratch, digging up the same streets, to reach you.</p><hr><p><a href=\"https://sschueller.github.io/posts/the-free-market-lie/open-access-colored.png\" rel><img loading=\"lazy\" src=\"https://sschueller.github.io/posts/the-free-market-lie/open-access-colored.png\" alt=\"Open Access\" height=\"1504\" width=\"2760\"></a></p><p>Now look at Switzerland. Here, the physical infrastructure, the fiber in the ground, is treated as a neutral, shared asset. It’s built once, often by a public or semi-public entity.</p><p>Every home gets a dedicated 4-strand fiber line. Point-to-Point. Not shared. Not split 32 ways.</p><p><a href=\"https://sschueller.github.io/posts/the-free-market-lie/4-fiber.png\" rel><img loading=\"lazy\" src=\"https://sschueller.github.io/posts/the-free-market-lie/4-fiber.png\" alt=\"4 Fiber\" height=\"1536\" width=\"2814\"></a></p><p>That dedicated fiber terminates in a neutral, open hub. And <em>any</em> internet service provider can connect to that hub.</p><p><a href=\"https://www.init7.net/en/internet/fiber7/\" target=\"_blank\" rel=\"noopener noreferrer\">Init7</a>, <a href=\"https://www.swisscom.ch/\" target=\"_blank\" rel=\"noopener noreferrer\">Swisscom</a>, <a href=\"https://www.salt.ch/\" target=\"_blank\" rel=\"noopener noreferrer\">Salt</a>, or a tiny local ISP, they all have equal access to the physical line that goes into your home.<sup id=\"fnref:2\"><a href=\"https://sschueller.github.io/posts/the-free-market-lie/#fn:2\" role=\"doc-noteref\">2</a></sup></p><p><a href=\"https://sschueller.github.io/posts/the-free-market-lie/oto.png\" rel><img loading=\"lazy\" src=\"https://sschueller.github.io/posts/the-free-market-lie/oto.png\" alt=\"OTO\" height=\"639\" width=\"577\"></a></p><p>This means you, the consumer, have genuine choice. 
When you sign up with a provider, you simply give them your OTO (Optical Termination Outlet) number, the unique identifier printed on the fiber optic plate in your home. It tells the provider exactly which fiber connection is yours. That’s it. No technician needs to visit. No one needs to dig up your street. You just call, give them the number, and within days (though not always), your new service is active.</p><p>And because your home has four separate fiber strands, you’re not locked into a single provider. You can have <a href=\"https://www.init7.net/en/internet/fiber7/\" target=\"_blank\" rel=\"noopener noreferrer\">Init7</a> on one strand, Swisscom on another, and a local utility on a third. You can switch providers with a phone call. You can try a new provider without canceling your old one first. The competition happens on price, speed, and customer service, not on who happens to own the cable in front of your house.</p><hr><p><a href=\"https://www.speedtest.net/result/c/f11e8bde-e5a3-4fe7-9c-13-f0ef236d0566\" target=\"_blank\" rel=\"noopener noreferrer\"><img loading=\"lazy\" src=\"https://www.speedtest.net/result/c/f11e8bde-e5a3-4fe7-9c-13-f0ef236d0566.png\" alt=\"Speedtest\"></a></p><p>In Switzerland, you can get 25-gigabit-per-second fiber to your home. Today. Symmetric. Dedicated. Not shared with your neighbors.</p><p>In Switzerland, you have a choice of a dozen or more providers in most cities. Prices are competitive. Customer service matters because you can leave at any time.</p><p>In the United States, the majority of households have only one choice for high-speed internet. Speeds are lower. Prices are higher. And the technology is often a decade behind.</p><p>The “free market” promised innovation. It delivered rent-seeking. The incumbents have no incentive to upgrade because you have nowhere else to go.</p><p>American broadband prices have risen faster than inflation for decades. 
Speeds have increased only when a competitor, usually a municipal utility, forces the incumbent to respond.</p><p>Without competition, there is no innovation. There is only profit extraction.</p><hr><p>Switzerland didn’t arrive at this model by accident nor did it happen because telecom companies were feeling generous. It happened because regulators forced it to happen.</p><p>Back in 2008, when the industry sat down at the Round Table organized by the Federal Communications Commission, it was Swisscom, the incumbent itself, that pushed for the four-fiber Point-to-Point model. The company argued that a single fiber would create a monopoly and that regulation would be necessary.<sup id=\"fnref:3\"><a href=\"https://sschueller.github.io/posts/the-free-market-lie/#fn:3\" role=\"doc-noteref\">3</a></sup></p><p>So the standard was set. Four fibers per home. Point-to-Point. Open access for competitors on Layer 1 - the physical fiber itself.<sup id=\"fnref:4\"><a href=\"https://sschueller.github.io/posts/the-free-market-lie/#fn:4\" role=\"doc-noteref\">4</a></sup></p><p>Then, in 2020, Swisscom changed course. The company announced a new network expansion strategy, this time using P2MP, the shared model with splitters. On paper, they argued it was cheaper and faster to deploy.</p><p><a href=\"https://www.galaxus.ch/de/s1/product/planet-gepon-splitter-1x8-plc-splitt-zubehoer-netzwerk-24666302\" target=\"_blank\" rel=\"noopener noreferrer\"><img loading=\"lazy\" src=\"https://sschueller.github.io/posts/the-free-market-lie/gepon.png\" alt=\"GEPON\" height=\"966\" width=\"1273\">GEPON P2MP Splitter</a></p><p>But the effect was clear. Under the P2MP design, competitors would no longer have direct access to the physical fiber. Instead of plugging into their own dedicated fiber strand, they would have to rent access from Swisscom at a higher network layer - effectively becoming resellers of Swisscom’s infrastructure. 
The open, competitive matrix that had been carefully built over years would disappear.</p><p>The small ISP Init7 filed a complaint with Switzerland’s competition authority, COMCO, which later opened an investigation. In December 2020, they issued a precautionary measure: Swisscom could not continue its P2MP rollout unless it guaranteed the same Layer 1 access that the original standard provided.<sup id=\"fnref:5\"><a href=\"https://sschueller.github.io/posts/the-free-market-lie/#fn:5\" role=\"doc-noteref\">5</a></sup></p><p>Swisscom fought this all the way to the Federal Court. They lost. In 2021, the Federal Administrative Court confirmed COMCO’s measures, stating that Swisscom had failed to demonstrate “sufficient technological or economic grounds” to deviate from the established fiber standard.<sup id=\"fnref1:5\"><a href=\"https://sschueller.github.io/posts/the-free-market-lie/#fn:5\" role=\"doc-noteref\">5</a></sup> In April 2024, COMCO finalized its ruling, fining Swisscom 18 million francs for violating antitrust law.<sup id=\"fnref:6\"><a href=\"https://sschueller.github.io/posts/the-free-market-lie/#fn:6\" role=\"doc-noteref\">6</a></sup></p><div><p><span><svg viewbox=\"0 0 512 512\"><path d=\"M256 8C119.043 8 8 119.083 8 256c0 136.997 111.043 248 248 248s248-111.003 248-248C504 119.083 392.957 8 256 8zm0 110c23.196.0 42 18.804 42 42s-18.804 42-42 42-42-18.804-42-42 18.804-42 42-42zm56 254c0 6.627-5.373 12-12 12h-88c-6.627.0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h12v-64h-12c-6.627.0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h64c6.627.0 12 5.373 12 12v1e2h12c6.627.0 12 5.373 12 12v24z\"/></svg></span>Note<span><svg viewbox=\"0 0 256 512\"><path d=\"M224.3 273l-136 136c-9.4 9.4-24.6 9.4-33.9.0l-22.6-22.6c-9.4-9.4-9.4-24.6.0-33.9l96.4-96.4-96.4-96.4c-9.4-9.4-9.4-24.6.0-33.9L54.3 103c9.4-9.4 24.6-9.4 33.9.0l136 136c9.5 9.4 9.5 24.6.1 34z\"/></svg></span></p><div><p>Swisscom is 51% owned by the Swiss Confederation. 
So, in simple terms, 51% state-owned and 49% privately/institutionally owned. Whether this makes the fine “symbolic” is a matter of opinion.</p></div></div><p>The result? Swisscom was forced to return to the four-fiber, Point-to-Point architecture it had originally championed.<sup id=\"fnref1:3\"><a href=\"https://sschueller.github.io/posts/the-free-market-lie/#fn:3\" role=\"doc-noteref\">3</a></sup> Competitors retained their direct, physical access to the fiber network. The walled garden was prevented.</p><p>Whether intended or not, the effect of Swisscom’s P2MP shift was clear: competitors would have been locked out of the physical infrastructure.</p><p>Swisscom is a bit of a walking contradiction. Being majority state-owned, it’s supposed to be a public service. But it’s also a private company, and maximizing profit benefits the state coffers. But that is something for another blog post.</p><hr><p>This is the paradox that confuses so many people.</p><p>The American and German approach of letting incumbents build monopolies, allowing wasteful overbuild, and refusing to regulate natural monopolies is often called a ‘free market.’</p><p>But it’s not free. And it’s not a market.</p><p>True capitalism requires competition. But infrastructure is a natural monopoly. If you treat it like a regular consumer product, you don’t get competition. You get waste, or you get a monopoly.</p><p>The Swiss model understands this. They built the infrastructure once, as a shared, neutral asset, and then let the market compete on the services that run over it.</p><p>That’s not anti-capitalist. It’s actually better capitalism. It directs competition to where it adds value, not to where it destroys it.</p><p>The free market doesn’t mean letting powerful incumbents do whatever they want. It means creating the conditions where genuine competition can thrive.</p><hr><p>So what can other countries learn from Switzerland? 
Here are the key policy changes that would help:</p><ol><li><p><strong>Mandate open access to physical infrastructure</strong> - require incumbents to share fiber ducts and dark fiber with competitors at cost-based prices. This is not “socialism” - it is how electricity and water work.</p></li><li><p><strong>Enforce Point-to-Point architecture</strong> - require that every home gets dedicated fiber strands, not shared splitters. This ensures competitors can access the physical layer, not just resell bandwidth.</p></li><li><p><strong>Create a neutral fiber standard</strong> - establish national standards that require multi-fiber deployment to every home, as Switzerland did in 2008.</p></li><li><p><strong>Empower competition authorities</strong> - give regulators like COMCO real teeth to enforce these rules. Fines must be large enough to matter.</p></li><li><p><strong>Support municipal fiber</strong> - allow cities and towns to build their own fiber networks when incumbents fail to serve residents adequately.</p></li></ol><p>If you care about faster internet and lower prices, push your representatives to support these policies. The technology exists. The money exists. 
What is missing is the political will to demand real competition.</p></div></div>\n<img src=\"https://readable.news/api/telemetry?url=https%3A%2F%2Fsschueller.github.io%2Fposts%2Fthe-free-market-lie%2F\" width=\"1\" height=\"1\" alt=\"\">","excerpt":"The Free Market Lie: Why Switzerland Has 25 Gbit Internet and America Doesn't","image":"https://sschueller.github.io/posts/the-free-market-lie/title-image.png","authors":[{"name":"Stefan Schüller","url":"https://sschueller.github.io/posts/the-free-market-lie/","avatar":"https://sschueller.github.io/favicon.ico"}],"id":"47652400","url":"https://sschueller.github.io/posts/the-free-market-lie/","external_url":"https://news.ycombinator.com/item?id=47652400","date_published":"2026-04-05T18:29:47Z"},{"title":"Git commands I run before reading any code","content_html":"<div class=\"page\" id=\"readability-page-1\"><div> <header> <p>Five git commands that tell you where a codebase hurts before you open a single file. Churn hotspots, bus factor, bug clusters, and crisis patterns.</p> <small> Ally Piechowski · <time datetime=\"2026-04-08\">Apr 8, 2026</time> · 4 min read </small> <ul role=\"list\"><li><a href=\"https://piechowski.io/tags/development\">development</a></li><li><a href=\"https://piechowski.io/tags/git\">git</a></li></ul> </header> <img src=\"https://piechowski.io/post/git-commands-before-reading-code/cover_hu3f66e25b7571f7e32d40f355f31a2ca9_56928_1500x0_resize_q75_h2_box_2.webp\" alt=\"The Git Commands I Run Before Reading Any Code\" width=\"1500\" height=\"760\"> <p>The first thing I usually do when I pick up a new codebase isn’t opening the code. It’s opening a terminal and running a handful of git commands. 
Before I look at a single file, the commit history gives me a diagnostic picture of the project: who built it, where the problems cluster, whether the team is shipping with confidence or tiptoeing around land mines.</p> <h2 id=\"what-changes-the-most\">What Changes the Most</h2> <figure><pre tabindex=\"0\"><code data-lang=\"bash\"><span><span>git log --format<span>=</span>format: --name-only --since<span>=</span><span>\"1 year ago\"</span> <span>|</span> sort <span>|</span> uniq -c <span>|</span> sort -nr <span>|</span> head -20 </span></span></code></pre></figure><p>The 20 most-changed files in the last year. The file at the top is almost always the one people warn me about. “Oh yeah, that file. Everyone’s afraid to touch it.”</p> <p>High churn on a file doesn’t mean it’s bad. Sometimes it’s just active development. But high churn on a file that nobody wants to own is the clearest signal of codebase drag I know. That’s the file where every change is a patch on a patch. The blast radius of a small edit is unpredictable. The team pads their estimates because they know it’s going to fight back.</p> <p>A <a href=\"https://www.microsoft.com/en-us/research/publication/use-of-relative-code-churn-measures-to-predict-system-defect-density/\">2005 Microsoft Research study</a> found churn-based metrics predicted defects more reliably than complexity metrics alone. I take the top 5 files from this list and cross-reference them against the bug hotspot command below. A file that’s high-churn <em>and</em> high-bug is your single biggest risk.</p> <h2 id=\"who-built-this\">Who Built This</h2> <figure><pre tabindex=\"0\"><code data-lang=\"bash\"><span><span>git shortlog -sn --no-merges </span></span></code></pre></figure><p>Every contributor ranked by commit count. If one person accounts for 60% or more, that’s your bus factor. If they left six months ago, it’s a crisis. 
If the top contributor from the overall shortlog doesn’t appear in a 6-month window (<code>git shortlog -sn --no-merges --since=\"6 months ago\"</code>), I flag that to the client immediately.</p> <p>I also look at the tail. Thirty contributors but only three active in the last year. The people who built this system aren’t the people maintaining it.</p> <p>One caveat: squash-merge workflows compress authorship. If the team squashes every PR into a single commit, this output reflects who merged, not who wrote. Worth asking about the merge strategy before drawing conclusions.</p> <h2 id=\"where-do-bugs-cluster\">Where Do Bugs Cluster</h2> <figure><pre tabindex=\"0\"><code data-lang=\"bash\"><span><span>git log -i -E --grep<span>=</span><span>\"fix|bug|broken\"</span> --name-only --format<span>=</span><span>''</span> <span>|</span> sort <span>|</span> uniq -c <span>|</span> sort -nr <span>|</span> head -20 </span></span></code></pre></figure><p>Same shape as the churn command, filtered to commits with bug-related keywords. Compare this list against the churn hotspots. Files that appear on both are your highest-risk code: they keep breaking and keep getting patched, but never get properly fixed.</p> <p>This depends on commit message discipline. If the team writes “update stuff” for every commit, you’ll get nothing. But even a rough map of bug density is better than no map.</p> <h2 id=\"is-this-project-accelerating-or-dying\">Is This Project Accelerating or Dying</h2> <figure><pre tabindex=\"0\"><code data-lang=\"bash\"><span><span>git log --format<span>=</span><span>'%ad'</span> --date<span>=</span>format:<span>'%Y-%m'</span> <span>|</span> sort <span>|</span> uniq -c </span></span></code></pre></figure><p>Commit count by month, for the entire history of the repo. I scan the output looking for shapes. A steady rhythm is healthy. But what does it look like when the count drops by half in a single month? Usually someone left. 
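That drop-by-half pattern can be flagged mechanically instead of by eye. A minimal sketch (my own addition, not from the article; assumes a POSIX shell and awk):

```shell
# Flag any month whose commit count fell to half (or less) of the
# previous month's count. Input is the same count-by-month histogram
# produced by the git log command above.
git log --format='%ad' --date=format:'%Y-%m' \
  | sort | uniq -c \
  | awk 'prev > 0 && $1 <= prev/2 {print $2 ": " prev " -> " $1} {prev = $1}'
```

Anything it prints is a month worth asking the team about.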
A declining curve over 6 to 12 months tells you the team is losing momentum. Periodic spikes followed by quiet months mean the team batches work into releases instead of shipping continuously.</p> <p>I once showed a CTO their commit velocity chart and they said “that’s when we lost our second senior engineer.” They hadn’t connected the timeline before. This is team data, not code data.</p> <h2 id=\"how-often-is-the-team-firefighting\">How Often Is the Team Firefighting</h2> <figure><pre tabindex=\"0\"><code data-lang=\"bash\"><span><span>git log --oneline --since<span>=</span><span>\"1 year ago\"</span> <span>|</span> grep -iE <span>'revert|hotfix|emergency|rollback'</span> </span></span></code></pre></figure><p>Revert and hotfix frequency. A handful over a year is normal. Reverts every couple of weeks mean the team doesn’t trust its deploy process. They’re evidence of a <a href=\"https://piechowski.io/post/codebase-drag-audit/#2-deploy-fear\">deeper issue</a>: unreliable tests, missing staging, or a deploy pipeline that makes rollbacks harder than they should be. Zero results is also a signal; either the team is stable, or nobody writes descriptive commit messages.</p> <p>Crisis patterns are easy to read. Either they’re there or they’re not.</p> <hr> <p>These five commands take a couple of minutes to run. They won’t tell you everything. But you’ll know which code to read first, and what to look for when you get there. That’s the difference between spending your first day reading the codebase methodically and spending it wandering.</p> <p>This is the first hour of what I do in a <a href=\"https://piechowski.io/post/how-i-audit-a-legacy-rails-codebase/\">codebase audit</a>. 
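The cross-referencing step described earlier, comparing the churn hotspots against the bug hotspots, can also be scripted. A minimal sketch, assuming GNU coreutils and a POSIX shell; the `churn.txt`/`bugs.txt` file names are mine, not part of any tool:

```shell
# Build the two top-20 hotspot lists from the commands above, keeping
# only the file-name column (awk drops the uniq -c counts and the
# blank-line entry; paths containing spaces would need extra handling).
git log --format=format: --name-only --since="1 year ago" \
  | sort | uniq -c | sort -nr | head -20 \
  | awk 'NF==2 {print $2}' | sort > churn.txt
git log -i -E --grep="fix|bug|broken" --name-only --format='' \
  | sort | uniq -c | sort -nr | head -20 \
  | awk 'NF==2 {print $2}' | sort > bugs.txt
# comm -12 prints only lines present in both sorted files:
# high-churn AND bug-prone, the highest-risk code in the repo.
comm -12 churn.txt bugs.txt
```

Run it from the repository root; an empty result usually means the bug grep matched nothing, not that the code is clean.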
Here’s what the rest of the week looks like.</p> <hr> <h2>Related Articles</h2> <ul> <li><a href=\"https://piechowski.io/post/codebase-drag-audit/\">Why Your Engineering Team Is Slow (It's the Codebase, Not the People)</a></li> <li><a href=\"https://piechowski.io/post/vim-tabclose/\">How to Close a Tab in Vim</a></li> <li><a href=\"https://piechowski.io/post/how-i-audit-a-legacy-rails-codebase/\">How I Audit a Legacy Rails Codebase in the First Week</a></li> <li><a href=\"https://piechowski.io/post/vim-tabnew/\">How to Open a New Tab in Vim</a></li> <li><a href=\"https://piechowski.io/post/why-is-default-scope-bad-rails/\">Rails default_scope: Why You Should Never Use It</a></li> </ul> </div></div>\n<img src=\"https://readable.news/api/telemetry?url=https%3A%2F%2Fpiechowski.io%2Fpost%2Fgit-commands-before-reading-code%2F\" width=\"1\" height=\"1\" alt=\"\">","excerpt":"\"Five git commands that tell you where a codebase hurts before you open a single file. Churn hotspots, bus factor, bug clusters, and crisis patterns.\"","image":"https://piechowski.io/post/git-commands-before-reading-code/cover_hu3f66e25b7571f7e32d40f355f31a2ca9_56928_1200x630_resize_box_2.png","authors":[{"name":"Ally Piechowski","url":"https://piechowski.io/post/git-commands-before-reading-code/","avatar":"https://piechowski.io/favicon-32x32.png"}],"id":"47687273","url":"https://piechowski.io/post/git-commands-before-reading-code/","external_url":"https://news.ycombinator.com/item?id=47687273","date_published":"2026-04-08T08:53:42Z"},{"id":"47679258","url":"https://www-cdn.anthropic.com/53566bf5440a10affd749724787c8913a2ae0841.pdf","external_url":"https://news.ycombinator.com/item?id=47679258","title":"System Card: Claude Mythos Preview [pdf]","date_published":"2026-04-07T18:18:36Z"},{"title":"Show HN: Brutalist Concrete Laptop Stand (2024)","content_html":"<div class=\"page\" id=\"readability-page-1\"><div><p>I am a great lover of brutalist architecture. 
1960s concrete buildings may not be for everyone, but I love the aesthetic. I’ve made a laptop stand to help me hack in true brutalist style. It has the characteristic <em>beton brut</em> (raw concrete) surface texture, and is quite possibly the heaviest laptop stand in the world. It also boasts 2 x 2.1 amp USB charge ports, a three-pin plug socket for my laptop, and an integral plant pot. Here are some of its highlights.</p> <div> <div> <p><img src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-1/01-concrete-laptop-stand-being-used.jpg\" alt=\"Concrete laptop stand in use\"></p><p>Concrete laptop stand in use</p> </div> <div> <p><img src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-1/02-concrete-laptop-stand-plug-socket.jpg\" alt=\"Plug socket and 2 USB charge ports\"></p><p>Plug socket and 2 USB charge ports</p> </div> <div> <p><img src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-1/03-concrete-laptop-stand-plant.jpg\" alt=\"Integral plant pot in corner of concrete laptop stand\"></p><p>Integral plant pot in corner of concrete laptop stand</p> </div> <div> <p><img src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-1/04-exposed-wire.jpg\" alt=\"Rusted rebar and exposed wire add to the theme of urbex and decay\"></p><p>Rusted rebar and exposed wire add to the theme of urbex and decay</p> </div> </div> <br> <div> <h2 id=\"key-features\">Key Features</h2> <figure> <img src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-2/01-concrete-laptop-stand-drawing.jpg\" alt=\"An early drawing of the laptop stand\"><figcaption> <p>An early drawing of the laptop stand</p> </figcaption> </figure> <p>The key features include:</p> <ul> <li>Brutalist style overhang</li> <li>Urban decay aesthetic with a damaged corner and rusted rebar</li> <li>3-pin 
plug socket</li> <li>2 x USB charge ports</li> <li>Rusted exposed rebar</li> <li>Corroded exposed copper wire</li> <li>Integral plant pot with string of pearls plant</li> <li>Artificially rusted penpot</li> </ul> <h2 id=\"making-the-laptop-stand\">Making the Laptop Stand</h2> <p>It was a slow process, but here are some action shots of making the laptop stand:</p> <h2 id=\"the-components\">The Components</h2> <h3 id=\"concrete\">Concrete</h3> <figure> <img src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-2/02-concrete-laptop-stand-rebar-cage.jpg\" alt=\"The rebar cage inside the form, awaiting the first pour\"><figcaption> <p>The rebar cage inside the form, awaiting the first pour</p> </figcaption> </figure> <p>There were two main pours of concrete, to do the base and the side walls. It intentionally wasn’t mixed very thoroughly, to produce areas on the surface where there was more sand or more cement. Sanding the sides has also exposed the gravel in the concrete. This helps to make it look aged and weathered.</p> <p>On smaller pieces such as little plant pots or coasters, it is possible to use quick-drying cement and get the bubbles out by vibrating the form with an electric toothbrush after the pour. For very large pieces such as a dining table, you need to use slow-drying cement, and walk around the tabletop for ages, tapping the form with a rubber mallet to remove any air bubbles. For a medium-sized piece like this, a vibrating dildo is actually the best thing to use. Just think of it like any other power tool.</p> <h3 id=\"plant-pot\">Plant Pot</h3> <figure> <img src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-2/06-concrete-laptop-stand-integral-plantpot.jpg\" alt=\"Integral plant pot is a tin set into the concrete\"><figcaption> <p>Integral plant pot is a tin set into the concrete</p> </figcaption> </figure> <p>The plant pot is made of a ghee tin. 
Four bolts were drilled through it and covered in concrete during the first pour to fix it in place. The inner pot is a grey plastic plant pot which fits perfectly in the ghee tin. I’ve chosen a string of pearls plant, because I liked the effect of a running plant hanging over the edge. It reminds me of the derelict buildings I’ve seen during urban exploration.</p> <h3 id=\"exposed-wire\">Exposed Wire</h3> <p>The exposed wire really adds a sense of dilapidation and urban decay. This isn’t actually the live power cable, but it has been made to look like one. The real cable disappears into the concrete on the right-hand side of the laptop stand, and the damaged fake cable comes out of the other side of the wall. The real power lead is strapped to the rebar cage with cable ties, but the overall effect is that it looks like the live cable is badly damaged.</p> <p>The wire had to be wrapped in kitchen paper and sprayed with ammonia and water, to produce the appropriate corrosion effect. Attempts to lower it into a little pot filled with liquid didn’t really work - the copper compounds turned the liquid blue, but it wasn’t forming a patina on the wire.</p> <p>Here’s what seems to be happening:</p><p> $$ \\ce{Cu2+ + 2NH3 + 2H2O -&gt; Cu(OH)2 + 2NH4+} $$</p><p>The exposed rebar was first polished with a wire brush attachment on a Dremel tool, to remove the concrete and expose the metal, then it was rusted with water, salt, and hydrogen peroxide.</p> <h3 id=\"penpot\">Penpot</h3> <figure> <img src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-2/07-concrete-laptop-stand-can-penpot.jpg\" alt=\"Rusted penpot being painted with mold\"><figcaption> <p>Rusted penpot being painted with mold</p> </figcaption> </figure> <p>The penpot was similarly rusted with salt water and peroxide, after being scuffed up with some sandpaper. It has also had some moss added: acrylic paint cut with sand, to produce a realistic texture. 
Dab, don’t wipe.</p> <h2 id=\"summary\">Summary</h2> <p>I’m delighted with my laptop stand, even if the aesthetic isn’t to everyone’s taste. The themes of brutalist architecture, urban decay, and dilapidation have worked out really nicely, especially with the deliberate hole and the rusted metal. It has pride of place on my desk, even though its sheer weight meant it had to be carried there on a trolley. Nothing worthwhile comes easy.</p> <p><img alt=\"Concrete laptop stand still in mold\" loading=\"lazy\" src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-2/03-concrete-laptop-stand-in-form.jpg\"> <img alt=\"Concrete laptop stand still in mold\" loading=\"lazy\" src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-2/04-concrete-laptop-stand-demolding.jpg\"> <img alt=\"Concrete laptop stand still in mold\" loading=\"lazy\" src=\"https://sam-burns.com/posts/concrete-laptop-stand/images/gallery-2/05-concrete-laptop-stand-gap-with-polystyrene.jpg\"></p> </div> </div></div>","excerpt":"I made a laptop stand out of solid concrete, in the style of brutalist architecture, with themes of urbex and urban decay","image":"https://sam-burns.com/posts/concrete-laptop-stand/images/main-concrete-laptop-stand-overview.jpg","authors":[{"name":"Sam Burns' Tech Blog","url":"https://sam-burns.com/posts/concrete-laptop-stand/","avatar":"https://sam-burns.com/images/favicon/favicon.ico"}],"id":"47673360","url":"https://sam-burns.com/posts/concrete-laptop-stand/","external_url":"https://news.ycombinator.com/item?id=47673360","date_published":"2026-04-07T11:07:44Z"},{"title":"Veracrypt project update","content_html":"<div class=\"page\" id=\"readability-page-1\"><div data-off-canvas-content> <div> <div> <p><a href=\"https://sourceforge.net/\" title=\"Home\"> <img 
src=\"https://a.fsdn.com/con/images/sandiego/sf-logo-full.svg\" alt=\"SourceForge logo\"> </a></p> </div> <section> <nav> <a href=\"https://sourceforge.net/\" title=\"Home\"> <svg version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" x=\"0px\" y=\"0px\" viewbox=\"0 0 653 102.6\" style=\"enable-background:new 0 0 653 102.6;\" xml:space=\"preserve\"><path d=\"M66.9,54.5c0-19.1-6.8-27.8-10.4-31.1c-0.7-0.6-1.8-0.1-1.7,0.9c0.7,10.8-12.9,13.5-12.9,30.4h0    c0,0,0,0.1,0,0.1c0,10.3,7.8,18.7,17.4,18.7c9.6,0,17.4-8.4,17.4-18.7c0,0,0-0.1,0-0.1h0c0-4.8-1.8-9.4-3.6-12.8    c-0.4-0.7-1.4-0.4-1.3,0.2C75.1,56.7,66.9,65.7,66.9,54.5z\"/><g> <path d=\"M46.2,94.8c-0.4,0-0.9-0.2-1.2-0.5L0.5,49.8c-0.6-0.6-0.6-1.7,0-2.4l47-47C47.8,0.2,48.2,0,48.6,0h13.5        c0.8,0,1.3,0.5,1.5,1c0.2,0.5,0.2,1.2-0.4,1.8L19.1,47c-0.9,0.9-0.9,2.3,0,3.2L54,85.2c0.6,0.6,0.6,1.7,0,2.4l-6.7,6.8        C47,94.6,46.6,94.8,46.2,94.8z\"/></g><g> <path d=\"M55.1,102.6c-0.8,0-1.3-0.5-1.5-1c-0.2-0.5-0.2-1.2,0.4-1.8l44.2-44.2c0.4-0.4,0.7-1,0.7-1.6        c0-0.6-0.2-1.2-0.7-1.6L63.2,17.4c-0.6-0.6-0.6-1.7,0-2.4l6.8-6.8c0.3-0.3,0.7-0.5,1.2-0.5S72,8,72.3,8.3l44.4,44.5        c0.3,0.3,0.5,0.7,0.5,1.2s-0.2,0.9-0.5,1.2l-47,47c-0.3,0.3-0.7,0.5-1.2,0.5H55.1z\"/></g><g> <g> <path d=\"M167.2,32c-0.2,0.4-0.5,0.6-1,0.6c-0.3,0-0.7-0.2-1.2-0.7c-0.5-0.5-1.2-1-2-1.5c-0.9-0.6-1.9-1.1-3.2-1.5            c-1.3-0.5-2.9-0.7-4.8-0.7c-1.9,0-3.5,0.3-5,0.8c-1.4,0.5-2.6,1.3-3.6,2.2s-1.7,2-2.2,3.2c-0.5,1.2-0.8,2.5-0.8,3.8            c0,1.8,0.4,3.2,1.1,4.4c0.7,1.1,1.7,2.1,3,2.9c1.2,0.8,2.6,1.5,4.2,2c1.6,0.6,3.2,1.1,4.8,1.6c1.6,0.5,3.2,1.1,4.8,1.8            c1.6,0.6,2.9,1.5,4.2,2.4s2.2,2.2,3,3.6c0.7,1.4,1.1,3.2,1.1,5.3c0,2.2-0.4,4.2-1.1,6.1c-0.7,1.9-1.8,3.6-3.2,5            c-1.4,1.4-3.2,2.5-5.2,3.4c-2.1,0.8-4.4,1.2-7,1.2c-3.4,0-6.4-0.6-8.8-1.8c-2.5-1.2-4.6-2.9-6.5-5l1-1.6c0.3-0.4,0.6-0.5,1-0.5            
c0.2,0,0.5,0.1,0.8,0.4c0.3,0.3,0.8,0.7,1.2,1.1c0.5,0.4,1.1,0.9,1.8,1.4c0.7,0.5,1.5,1,2.4,1.4c0.9,0.4,1.9,0.8,3.1,1.1            c1.2,0.3,2.5,0.4,4,0.4c2.1,0,3.9-0.3,5.5-0.9c1.6-0.6,3-1.5,4.1-2.5s2-2.4,2.6-3.8c0.6-1.5,0.9-3.1,0.9-4.7            c0-1.8-0.4-3.3-1.1-4.5c-0.7-1.2-1.7-2.2-3-3c-1.2-0.8-2.6-1.5-4.2-2c-1.6-0.5-3.2-1.1-4.8-1.6c-1.6-0.5-3.2-1.1-4.8-1.7            c-1.6-0.6-2.9-1.4-4.2-2.4c-1.2-1-2.2-2.2-3-3.7c-0.7-1.5-1.1-3.3-1.1-5.6c0-1.7,0.3-3.4,1-5c0.7-1.6,1.6-3,2.9-4.3            c1.3-1.2,2.8-2.2,4.7-3c1.9-0.7,4-1.1,6.4-1.1c2.7,0,5.1,0.4,7.3,1.3c2.1,0.9,4.1,2.2,5.9,3.9L167.2,32z\"/> <path d=\"M152.9,78.8c-3.5,0-6.6-0.6-9.1-1.9c-2.5-1.2-4.8-3-6.7-5.1l-0.3-0.3l1.3-2c0.6-0.7,1.1-0.8,1.5-0.8            c0.4,0,0.8,0.2,1.2,0.6c0.3,0.3,0.8,0.7,1.3,1.1c0.5,0.4,1.1,0.9,1.7,1.4c0.7,0.5,1.4,0.9,2.3,1.3c0.9,0.4,1.9,0.8,3,1            c1.1,0.3,2.4,0.4,3.9,0.4c2,0,3.8-0.3,5.3-0.9c1.5-0.6,2.8-1.4,3.9-2.4c1-1,1.9-2.2,2.4-3.6c0.6-1.4,0.8-2.9,0.8-4.5            c0-1.7-0.3-3.1-1-4.2c-0.7-1.1-1.6-2-2.8-2.8c-1.2-0.8-2.5-1.4-4-1.9c-1.5-0.5-3.1-1.1-4.8-1.6c-1.7-0.5-3.3-1.1-4.8-1.7            c-1.6-0.7-3.1-1.5-4.3-2.5c-1.3-1-2.3-2.4-3.1-3.9c-0.8-1.6-1.2-3.5-1.2-5.8c0-1.8,0.3-3.6,1-5.3c0.7-1.7,1.7-3.2,3-4.5            c1.3-1.3,3-2.3,4.9-3.1c1.9-0.8,4.2-1.2,6.6-1.2c2.8,0,5.3,0.4,7.5,1.3c2.2,0.9,4.2,2.3,6.1,4.1l0.3,0.3l-1.1,2.1            c-0.6,1.1-1.7,1.4-3.1,0.1c-0.5-0.4-1.1-0.9-2-1.4c-0.8-0.5-1.9-1-3.1-1.5c-1.2-0.4-2.7-0.7-4.6-0.7c-1.8,0-3.4,0.3-4.8,0.8            c-1.3,0.5-2.5,1.2-3.4,2.1c-0.9,0.9-1.6,1.9-2.1,3c-0.5,1.1-0.7,2.4-0.7,3.6c0,1.6,0.3,3,1,4c0.7,1.1,1.6,2,2.8,2.8            c1.2,0.8,2.5,1.4,4,2c1.5,0.5,3.1,1.1,4.8,1.6c1.6,0.5,3.3,1.1,4.8,1.8c1.6,0.7,3.1,1.5,4.3,2.5c1.3,1,2.3,2.3,3.1,3.8            c0.8,1.5,1.2,3.4,1.2,5.6c0,2.2-0.4,4.4-1.2,6.4c-0.8,2-1.9,3.7-3.4,5.2c-1.5,1.5-3.3,2.6-5.4,3.5            C158.1,78.3,155.6,78.8,152.9,78.8z M138.4,71.3c1.7,1.9,3.7,3.4,6,4.5c2.4,1.2,5.3,1.8,8.6,1.8c2.5,0,4.8-0.4,6.8-1.2            
\"/></g></g></svg> </a> </nav> </section> </div> <section id=\"page-body\"> <div id=\"content_base\"> <div> <h2> <span>Project Update</span> </h2> <div> <p><label>Created:</label> <span title=\"Mon Mar 30, 2026 03:10 PM UTC\"> 2026-03-30 </span> </p> 
<p><label>Updated:</label> <span title=\"Tue Apr 07, 2026 01:36 PM UTC\"> 1 day ago </span> </p> </div> <div> <div id=\"comment\"> <ul>  <li> <div id=\"3470\"> <div> <p> <img src=\"https://a.fsdn.com/allura/u/idrassi/user_icon?1684442984\" srcset=\"https://a.fsdn.com/allura/u/idrassi/user_icon?w=72&1684442984 1.5x\n        ,\n            https://a.fsdn.com/allura/u/idrassi/user_icon?w=96&1684442984 2x\" alt=\"Mounir IDRASSI\" title=\"Mounir IDRASSI\"> </p> <div><p>Hi everyone,</p> <p>I want to share an update following my absence over the past few months.</p> <p>I have encountered some challenges but the most serious one is that Microsoft terminated the account I have used for years to sign Windows drivers and the bootloader. You can see below a screenshot of the message shown when I tried to sign in.</p> <p>Microsoft did not send me any emails or prior warnings. I have received no explanation for the termination and their message indicates that no appeal is possible.</p> <p>I have tried to contact Microsoft through various channels but I have only received automated replies and bots. I was unable to reach a human.</p> <p>This termination impacts my work beyond VeraCrypt and has consequences for my daily job.</p> <p>Currently I'm out of options.</p> <p>Regarding VeraCrypt, I cannot publish Windows updates. 
Linux and macOS updates can still be done but Windows is the platform used by the majority of users and so the inability to deliver Windows releases is a major blow to the project.</p> <p>I'm open to proposals and help.</p> <p><img alt=\"Microsoft Termination\" rel=\"nofollow\" src=\"https://veracrypt.jp/MS_Termination.png\"></p></div> </div> <ul> <li> </li> </ul> </div>  <ul>  <li> <div id=\"3470/56c8\"> <div> <p> <img src=\"https://a.fsdn.com/con/images/sandiego/icons/default-avatar.png\" alt=\"Marty\" title=\"Marty\"> </p> <div><p>Some practical questions about the most current Windows release, until this situation can be resolved:</p> <p>The current version 1.26.24 is signed with the 2011 CA, which is soon to expire. This will certainly affect secureboot.... but how will this affect mounting non-system volumes (partitions and/or file containers) as a user? Will one have to disable secureboot just to install VeraCrypt, even if not using system encryption? And how will this affect portable use?</p> <p>The same question applies to unsigned versions people may choose to build for themselves for Windows.</p></div> </div> <ul> <li> </li> </ul> </div>  <ul> </ul>  </li>   <li> <div id=\"3470/44df\"> <div> <p> <img src=\"https://a.fsdn.com/con/images/sandiego/icons/default-avatar.png\" alt=\"AJ B\" title=\"AJ B\"> </p> <div> <div><p>Hi Mounir,</p> <p>I’m so sorry to hear about this. I would try contacting Microsoft using the link below. There is a link to “Help with the Microsoft account recovery form” on this page:</p> <p><a href=\"https://support.microsoft.com/en-us/account-billing/get-help-with-your-microsoft-account-ace6f3b3-e2d3-aeb1-6b96-d2e9e7e52133\" rel=\"nofollow\">https://support.microsoft.com/en-us/account-billing/get-help-with-your-microsoft-account-ace6f3b3-e2d3-aeb1-6b96-d2e9e7e52133</a></p> <p>There is also a link to “I need to talk to a customer support agent” on that page. 
Apologies if you have already tried these links.</p> <p>Alex R’s kind suggestions of posting to Reddit and Twitter (now X) are great too, since this will likely get you redirected to the right people.</p> <p>If your account has been disabled for more than 30 days it could be unrecoverable. You may need to set up a new account and start the verification process all over again to have your new account enabled for driver signing purposes.</p> <p>I hope this is somewhat helpful. Thanks.</p></div>&#160; <p><small>Last edit: AJ B 4 hours ago</small> </p></div> </div> <ul> <li> </li> </ul> </div>  <ul> </ul>  </li>  </ul>  </li>  </ul> <ul>  <li> <div id=\"060f\"> <div> <p> <img src=\"https://a.fsdn.com/con/images/sandiego/icons/default-avatar.png\" alt=\"Alex R\" title=\"Alex R\"> </p> <div><p>Would you be OK if I posted this to socials, such as Microsoft's Reddit or Twitter accounts? Might get some traction.</p></div> </div> <ul> <li> </li> </ul> </div>  <ul>  <li> <div id=\"060f/c890\"> <div> <p> <img src=\"https://a.fsdn.com/allura/u/idrassi/user_icon?1684442984&w=32\" srcset=\"https://a.fsdn.com/allura/u/idrassi/user_icon?w=48&1684442984 1.5x\n        ,\n            https://a.fsdn.com/allura/u/idrassi/user_icon?w=64&1684442984 2x\" alt=\"Mounir IDRASSI\" title=\"Mounir IDRASSI\"> </p> <div><p>Yes, no problem. I don't have much social presence so this can be helpful. 
Thanks.</p></div> </div> <ul> <li> </li> </ul> </div>  <ul> </ul>  </li>  </ul>  </li>  </ul> <ul>  <li> <div id=\"7c96\"> <div> <p> <img src=\"https://a.fsdn.com/con/images/sandiego/icons/default-avatar.png\" alt=\"风之暇想\" title=\"风之暇想\"> </p> <div><p>In view of this situation, I recommend adding a signature-independent program that provides archive-like creation and extraction functions (without support for real-time modification), so as to cope with highly targeted scenarios.</p></div> </div> <ul> <li> </li> </ul> </div>  <ul> </ul>  </li>  </ul> <ul>  <li> <div id=\"b275\"> <div> <p> <img src=\"https://a.fsdn.com/con/images/sandiego/icons/default-avatar.png\" alt=\"Gary Marks\" title=\"Gary Marks\"> </p> <div><p>This is a sad turn of events, Mounir! This may seem a bit out in left field (to use an American idiom), but is it possible that some seemingly minor aspect of your recent relocation to Japan is at the root of this inexplicable account revocation?</p> <p>Grasping at straws is a hobby of mine :)</p></div> </div> <ul> <li> </li> </ul> </div>  <ul> </ul>  </li>  </ul> <ul>  <li> <div id=\"d60f\"> <div> <p> <img src=\"https://a.fsdn.com/con/images/sandiego/icons/default-avatar.png\" alt=\"Phoenix\" title=\"Phoenix\"> </p> <div><p>I'm really sorry to hear this bad news. Someone probably reported the software, claiming it could be used for illegal activities, which led to the account being deleted. Unfortunately, the general trend is increasingly toward controlling and monitoring what people do, and there is less and less respect for privacy.</p> <p>Is there no way to get around this limitation, even temporarily? 
Perhaps by restricting the software, for now, to non-system partitions and volumes?</p></div> </div> <ul> <li> </li> </ul> </div>  <ul> </ul>  </li>  </ul> <ul>  <li> <div id=\"0022\"> <div> <p> <img src=\"https://a.fsdn.com/allura/u/enigma2illusion/user_icon?1629569553\" srcset=\"https://a.fsdn.com/allura/u/enigma2illusion/user_icon?w=72&1629569553 1.5x\n        ,\n            https://a.fsdn.com/allura/u/enigma2illusion/user_icon?w=96&1629569553 2x\" alt=\"Enigma2Illusion\" title=\"Enigma2Illusion\"> </p> <div><p><a href=\"https://sourceforge.net/u/idrassi/profile/\">@idrassi</a></p> <p>I would try sending an email to Microsoft CEO, Satya Nadella at:</p> <p>satyan@microsoft.com</p> <p>Use brief details from your first post in this thread along with the error message screenshot.</p> <p>Possible email shown below.</p> <hr> <p>Subject Line: Reinstate Partner Center Program Account for my Software Developer Business</p> <p>Dear Satya Nadella,</p> <p>I need someone from your staff to help me get my Partner Center program account reinstated for my Windows developer business.</p> <p>I attempted to log in to my Partner Center account and I received the following error message.</p> <p><code>insert your screenshot of error here</code></p> <p>I did not receive any email notices or prior warnings from Microsoft that there was an issue with my Partner Center account, and I have contacted support for assistance without results.</p> <p>This issue impacts my business's ability to provide third-party Windows software.</p> <p>Kind Regards,<br> Mounir Idrassi<br> <code>insert your business(es) name(s)</code><br> <code>insert your business email</code><br> <code>insert your business phone number and any alternate phone numbers</code><br> <code>insert your business address</code><br> Time zone : <code>JST (UTC+9)</code></p></div> </div> <ul> <li> </li> </ul> </div>  <ul> </ul>  </li>  </ul> <ul>  <li> <div id=\"fb2f\"> <div> <p> <img 
src=\"https://a.fsdn.com/con/images/sandiego/icons/default-avatar.png\" alt=\"Preguntar Jeeves\" title=\"Preguntar Jeeves\"> </p> <div> <div><p>First, I doubt (hope not!) the account is actually deleted, it's most likely just disabled/turned off. Your account is just in the recycle bin... :)</p> <p>I would email people in the crypto community/other devs who would know someone at MS - Bruce Schneier, Chris Titus, Niels Ferguson; or are someone at MS - Karen Easterbrook, Nathan Ide.</p> <p>I would also contact media - Tom's Hardware, Ars Technica, Wired, The Intercept, EFF. There are a bunch of Reddit subs that would apply - both in the privacy and crypto areas. Also the guys from the All-In podcast and Elon Musk.</p> <p>I would also contact the offices of the following US congressmen and ask for help: Rep. Thomas Massie and Senators Rand Paul and Ron Wyden. Even though you're not a US citizen, there is a compelling privacy interest for US citizens who are the bulk (guessing) of the users of this software.</p></div>&#160; <p><small>Last edit: Preguntar Jeeves 2 days ago</small> </p></div> </div> <ul> <li> </li> </ul> </div>  <ul> </ul>  </li>  </ul> </div> <hr> <p> <a href=\"https://sourceforge.net/auth/\">Log in</a> to post a comment. 
</p> </div> </div> </div> </section> </div></div>","excerpt":"","image":"https://a.fsdn.com/con/images/sandiego/sf-logo-full.svg","authors":[{"name":null,"url":"https://sourceforge.net/p/veracrypt/discussion/general/thread/9620d7a4b3/","avatar":"https://a.fsdn.com/con/img/sandiego/logo-180x180.png"}],"id":"47686549","url":"https://sourceforge.net/p/veracrypt/discussion/general/thread/9620d7a4b3/","external_url":"https://news.ycombinator.com/item?id=47686549","date_published":"2026-04-08T07:23:39Z"},{"title":"France pulls last gold held in US","content_html":"<div class=\"page\" id=\"readability-page-1\"><p>Stock image. 
</p><div> <p>The Bank of France (BdF) says it has pulled the remaining gold held in New York and replaced it with a similar amount of gold bars in its vaults in Paris.</p> <p>The gold amounted to&#160;129 tonnes — or about 5% of the bank’s total holdings, according to the bank’s press release <a href=\"https://www.banque-france.fr/en/press-release/net-profit-eur-81-billion-enabling-clearing-losses-carried-forward\">issued last week</a>.</p> <p>France, one of the world’s leading gold holders, has been storing some of its bullion with the Federal Reserve Bank of New York since the late 1920s.</p> <p>However, an operation to repatriate its gold holdings began in the 1960s, in the lead-up to the US termination of the Bretton Woods system, which effectively stopped foreign governments from exchanging dollars for gold.</p> <p>Despite that, France still held a small portion of its gold with the Federal Reserve Bank of New York.</p> <h2 id=\"h-gold-reserve-upgrade\">Gold reserve upgrade</h2> <p>Over the past 20 years, the BdF has also been replacing its “older” or “non‑standard” gold holdings — such as those in New York — with bars that meet modern international standards.</p> <p>On the recommendation of a 2024 internal audit, the bank replaced the US-held gold between July 2025 and January 2026. 
But instead of refining and transporting the gold, it opted to sell the bars and purchase new bullion in Europe.</p> <p>BdF Governor Francois Villeroy de Galhau said the decision to keep the new bars in Paris is “not politically motivated,” as the higher-standard gold bars it bought were traded on a European market.</p> <p>Due to rising gold prices, the move helped the bank generate a capital gain of €13 billion ($15 billion), bringing it to a net profit of €8.1 billion for the 2025 financial year after a net loss of €7.7 billion in 2024.</p> <p>The overall size of France’s gold reserves remained unchanged at roughly 2,437 tonnes, which are now entirely held at the BdF’s underground vault in La Souterraine.</p> <p>The French central bank still has 134 tonnes of gold to bring up to standard, which it aims to do by 2028.</p> </div></div>","excerpt":"The overall size of France's gold reserves remained unchanged at roughly 2,437 tonnes.","image":"https://www.mining.com/wp-content/uploads/2026/04/AdobeStock_748732093_Editorial_Use_Only-e1775408649344.jpeg","authors":[{"name":"MINING.COM","url":"https://www.mining.com/france-pulls-last-gold-held-in-us-for-15b-gain/","avatar":"https://www.mining.com/wp-content/themes/miningdotcom/images/favicon/android-icon-192x192.png"}],"id":"47658146","url":"https://www.mining.com/france-pulls-last-gold-held-in-us-for-15b-gain/","external_url":"https://news.ycombinator.com/item?id=47658146","date_published":"2026-04-06T08:03:43Z"},{"id":"47672818","url":"https://idiocracy.wtf/","external_url":"https://news.ycombinator.com/item?id=47672818","title":"Are We Idiocracy Yet?","date_published":"2026-04-07T09:57:39Z","content_html":"Failed to fetch https://idiocracy.wtf/: 400 Bad Request"},{"title":"The cult of vibe coding is dogfooding run 
amok","content_html":"<div class=\"page\" id=\"readability-page-1\"><div><article><div dir=\"auto\"><p><span>Claude had a leak of their source code, and </span><a href=\"https://neuromatch.social/@jonny/116325668039992121\" rel>people have been having a whole lot of fun laughing at how bad it is</a><span>. You might wonder how this could happen. The answer is dogfooding run amok.</span></p><p>Dogfooding is when you use your own product. It’s a good idea. But it can turn into a cult activity where it goes beyond any reasonable limits. In this case, the idea is vibe coding, where you make a point of literally making no contribution to what’s going on under the hood, not even looking at it.</p><p>This is, of course, ridiculous. It’s not like there isn’t human contribution happening here. For starters, you’re using a human language, and the machine is using that same human language for its own internal thought processes. You could argue that other humans, not on the development team, did all that foundational work and your team are doing pure vibe coding. But even that isn’t what’s happening. You’re still building the infrastructure of things like plan files (that’s fancy talk for ‘todo lists’), skills, and rules. The machine works very poorly without being given a framework.</p><p>So pure vibe coding is a myth. But they’re still trying to do it, and this leads to some very ridiculous outcomes. For example, a human actually looked at the leaked files and saw a lot of duplication between them. Now, you might ask: why didn’t any of the developers just go look for themselves? Again, it’s vibe coding. Looking under the hood is cheating. You’re only supposed to have vague conversations with the machine about what it’s doing.</p><p>This gets particularly silly because it’s not like there’s some super technical thing under the hood that the general public couldn’t understand. This code is written in English. Anyone could read it. 
It’s easy enough to go through and notice, “wow, there’s a whole bunch of things that are both agents and tools. That’s kind of redundant, maybe we should clean this up.”</p><p>This happens all the time in software. Projects are born in sin. Historically a software project would usually have so much tech debt that if you were doing what made sense from a pure development standpoint you would literally do nothing but clean up mess for the entire next year. Now that you can use AI for coding, you can sometimes get that cleanup done in a matter of weeks, or get it paid down a bit slower while still writing new features. And you should. You should strive for much higher quality. Helping you clean up mess is something AI is actually very good at.</p><p>In this particular case, a human could have told the machine: “There’s a lot of things that are both agents and tools. Let’s go through and make a list of all of them, look at some examples, and I’ll tell you which should be agents and which should be tools. We’ll have a discussion and figure out the general guidelines. Then we’ll audit the entire set, figure out which category each one belongs in, port the ones that are in the wrong type, and for the ones that are both, read through both versions and consolidate them into one document with the best of both.”</p><p>The AI is actually very good at this, especially if you have a conversation with it beforehand. That’s what Ask mode is for. You walk through some examples, share your reasoning, and correct the wrong things it says when trying to sycophantically agree with you. After enough back and forth, it’s often able to do what looks like one-shotting a task. It’s not really one-shotting at all. There was a lot of back and forth with you, the human, beforehand. But when it actually goes to do the thing, it zooms ahead because you’ve already clarified the weird edge cases and the issues likely to come up.</p><p>But the Claude team isn’t doing that. 
They’re going completely overboard with dogfooding and utterly refusing to even spend a few minutes looking under the hood, noticing what’s broken, and explaining the mess to the machine. That wouldn’t even be a big violation of the vibe coding concept. You’re reading the innards a little but you’re only giving high-level, conceptual, abstract ideas about how problems should be solved. The machine is doing the vast majority, if not literally all, of the actual writing.</p><p>I’ve been doing this for months. I’ll start a conversation by saying “Let’s audit this codebase for unreachable code,” or “This function makes my eyes bleed,” and we’ll have a conversation about it until something actionable comes up. Then I explain what I think should be done and we’ll keep discussing it until I stop having more thoughts to give and the machine stops saying stupid things which need correcting. Then I tell it to make a plan and hit build. This is my life. The AI is very bad at spontaneously noticing, “I’ve got a lot of spaghetti code here, I should clean it up.” But if you tell it this has spaghetti code and give it some guidance (or sometimes even without guidance) it can do a good job of cleaning up the mess.</p><p>You don’t have to have poor quality software just because you’re using AI for coding. That is my hot take for today. People have bad quality software because they decide to have bad quality software. I have been screaming at my computer this past week dealing with a library that was written by overpaid meatbags with no AI help. Bad software is a decision you make. You need to own it. 
You should do better.</p></div></article></div></div>","excerpt":"Bad software is a choice you make","image":"https://substackcdn.com/image/fetch/$s_!93xj!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fbramcohen.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D-1733548744%26version%3D9","authors":[{"name":"Bram’s Thoughts","url":"https://bramcohen.com/p/the-cult-of-vibe-coding-is-insane","avatar":"https://substackcdn.com/image/fetch/$s_!wuwJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F219deb71-aa9d-4c7d-8b3e-ad02b7aabf75%2Ffavicon.ico"}],"id":"47664912","url":"https://bramcohen.com/p/the-cult-of-vibe-coding-is-insane","external_url":"https://news.ycombinator.com/item?id=47664912","date_published":"2026-04-06T18:31:03Z"},{"id":"47677853","url":"https://z.ai/blog/glm-5.1","external_url":"https://news.ycombinator.com/item?id=47677853","title":"GLM-5.1: Towards Long-Horizon Tasks","date_published":"2026-04-07T16:32:15Z"},{"title":"US and Iran agree to provisional ceasefire","content_html":"<div class=\"page\" id=\"readability-page-1\"><div id=\"maincontent\"><p>The US and Iran agreed to a two-week conditional ceasefire on Tuesday evening, which included a temporary reopening of the strait of Hormuz, after a last-minute diplomatic intervention led by Pakistan, canceling an ultimatum from <a href=\"https://www.theguardian.com/us-news/donaldtrump\" data-link-name=\"in body link\">Donald Trump</a> for Iran to surrender or face widespread destruction.</p><p>Trump’s announcement of the ceasefire agreement came less than two hours before the US president’s self-imposed 8pm Eastern time deadline to bomb Iran’s power plants and bridges in a 
move that legal scholars, as well as officials from numerous countries and the pope, had warned could constitute war crimes.</p><p>Just hours earlier, Trump had written on Truth Social: “A whole civilization will die tonight, never to be brought back again. I don’t want that to happen, but it probably will.” American B-52 bombers were reported to be en route to Iran before the ceasefire agreement was announced.</p><p>But by Tuesday evening, Trump announced that a ceasefire agreement had been mediated through Pakistan, whose prime minister, Shehbaz Sharif, had requested the two-week peace in order to “allow diplomacy to run its course”.</p><p>Trump wrote in a post that “subject to the Islamic Republic of Iran agreeing to the COMPLETE, IMMEDIATE, and SAFE OPENING of the <a href=\"https://www.theguardian.com/world/strait-of-hormuz\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\">Strait of Hormuz</a>, I agree to suspend the bombing and attack of Iran for a period of two weeks”.</p><p>In a separate post later, the US 
president called Tuesday “a big day for world peace” in a social media post, claiming that Iran had “had enough”. He said the US would be “helping with the traffic buildup” in the strait of Hormuz and that “big money will be made” as Iran begins reconstruction.</p><p>For several hours afterwards, Israel’s position on the deal was unclear. But just before midnight ET, the prime minister, <a href=\"https://www.theguardian.com/world/benjamin-netanyahu\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\">Benjamin Netanyahu</a>, said Israel backed the US ceasefire with Iran but that the deal did not cover fighting against Hezbollah in Lebanon. His office said Israel also supported US efforts to ensure Iran no longer posed a nuclear or missile threat.</p><p>Pakistan’s prime minister had previously said that the agreed-upon ceasefire covered “everywhere including Lebanon”.</p><p><em>Video: JD Vance warns Iran to act in good faith in 'fragile' ceasefire</em></p><p>The ceasefire process was clouded in uncertainty after Iran released two different versions of the 10-point plan intended to be the basis for negotiations, which Trump said was a “workable basis on which to negotiate”.</p><p>In the version released in Farsi, Iran included the phrase “acceptance of enrichment” for its nuclear program. But for reasons that remain unclear, that phrase was missing in the English versions Iranian diplomats shared with journalists.</p><p>Pakistan has invited the US and Iran to talks in Islamabad on Friday. Tehran said it would attend, but Washington has yet to publicly accept the invitation.</p><p>In a telephone call with Agence France-Presse, Trump said he believed China had persuaded Iran to negotiate, and said Tehran’s enriched uranium would be “perfectly taken care of”, without providing more detail.</p><p>During the two-week ceasefire, Trump said, he believed the US and Iran could negotiate over the 10-point proposal so that an armistice could be “finalized and consummated”.</p><p>“This will be a double sided CEASEFIRE!” he continued. “The reason for doing so is that we have already met and exceeded all Military objectives, and are very far along with a definitive Agreement concerning Longterm PEACE with Iran, and PEACE in the Middle East.”</p><p>Iran’s foreign minister, Abbas Araghchi, <a href=\"https://x.com/araghchi/status/2041655156215799821\" data-link-name=\"in body link\">issued a statement</a> shortly after Trump’s announcement saying Iran had agreed to the ceasefire. 
“For a period of two weeks, safe passage through the Strait of Hormuz will be possible via coordinating with Iran’s Armed Forces,” he wrote.</p><p><em>Video: Jubilation on streets of Tehran as Iran and US agree two-week ceasefire</em></p><p><a href=\"https://www.theguardian.com/world/2026/apr/08/oil-prices-stock-today-futures-crude-donald-trump-iran-ceasefire\" data-link-name=\"in body link\">Oil prices dived, stocks surged</a> and the dollar was knocked back on Wednesday as a two-week Middle East ceasefire sparked a relief rally, fueled by hopes that oil and gas flows through the strait of Hormuz could resume.</p><p>Despite the provisional ceasefire, attacks 
continued across the region in the hours after Trump’s announcement. Before the deadline, airstrikes hit two bridges and a train station in Iran, and the US hit military infrastructure on Kharg Island, a key hub for Iranian oil production.</p><p>The sudden about-face allows Trump to step back from the US war in Iran, which has dragged on for five weeks with little sign that Tehran is ready to surrender or release its hold on the strait, a conduit for a fifth of the global energy supply, where traffic has slowed to a trickle.</p><p>Trump had earlier rejected the 10-point plan as “not good enough”, but the president has set deadlines before and allowed them to pass over the five weeks of the conflict. Yet he insisted on Tuesday that the ensuing hours would be “one of the most important moments in the long and complex history of the World” unless “something revolutionarily wonderful” happened, with “less radicalized minds” in Iran’s leadership.</p><p>News of the provisional ceasefire deal was welcomed elsewhere, though with a note of caution.</p><p>Iraq’s foreign ministry called for “serious and sustainable dialogue” between the US and <a href=\"https://www.theguardian.com/world/iran\" data-link-name=\"in body link\">Iran</a> “to address the root causes of the disputes”, while the German foreign minister, Johann Wadephul, said the deal “must be the crucial first step towards lasting peace, for the consequences of the war continuing would be incalculable”.</p><p>In Australia, the government warned that the latest developments would <a href=\"https://www.theguardian.com/australia-news/2026/apr/08/petrol-prices-rise-australia-iran-ceasefire-cheaper-fuel\" data-link-name=\"in body link\">not necessarily mean the fuel crisis is over.</a> Oil prices fell as traders bet that the reopening of the strait of Hormuz would help fuel supply resume, but the energy minister, Chris Bowen, told reporters Australians should “not get ahead of ourselves”.</p><p>He said: “People shouldn’t take today’s 
progress and expect prices to fall. We welcome progress, but I don’t think we can say the [strait of Hormuz is] now open.”</p><p>A spokesperson for New Zealand’s foreign minister, Winston Peters, welcomed the “encouraging news” but noted “there remains significant important work to be done to secure a lasting ceasefire”.</p><p>Japan said it expected the move to result in a “final agreement” after Washington and Tehran begin talks on Friday. Describing the ceasefire as a “positive move”, the chief cabinet secretary, Minoru Kihara, said Tokyo wanted to see a de-escalation on the ground in the region, adding that the prime minister, Sanae Takaichi, was seeking talks with the Iranian president, Masoud Pezeshkian.</p><p>A temporary end to hostilities will come as a relief to Japan, which depends on the Middle East for about 90% of its crude oil imports, most of which are transported through the strait of Hormuz.</p><p>South Korea’s ministry of foreign affairs said it hoped “negotiations between the two sides will be successfully concluded and that peace and stability in the Middle East will be restored at an early date”, and expressed wishes for “free and safe navigation of all vessels through the strait of Hormuz”.</p></div></div>","excerpt":"US president abandons threat for Iran to surrender or face destruction with last-minute intervention led by Pakistan","image":"https://i.guim.co.uk/img/media/7cde4a66207fd4d8c625b447df29716f6e1a7235/576_0_4582_3666/master/4582.jpg?width=1200&height=630&quality=85&auto=format&fit=crop&precrop=40:21,offset-x50,offset-y0&overlay-align=bottom%2Cleft&overlay-width=100p&overlay-base64=L2ltZy9zdGF0aWMvb3ZlcmxheXMvdGctZGVmYXVsdC5wbmc&enable=upscale&s=b2a8729bf6c3d7f318035c7c52fbec3a","authors":[{"name":"the 
Guardian","url":"https://www.theguardian.com/us-news/2026/apr/07/trump-iran-war-ceasefire","avatar":"https://static.guim.co.uk/images/favicon-32x32.ico"}],"id":"47682276","url":"https://www.theguardian.com/us-news/2026/apr/07/trump-iran-war-ceasefire","external_url":"https://news.ycombinator.com/item?id=47682276","date_published":"2026-04-07T22:41:02Z"},{"title":"A cryptography engineer's perspective on quantum computing timelines","content_html":"<div class=\"page\" id=\"readability-page-1\"><article> <time datetime=\"2026-04-06\"> 6 Apr 2026</time> <section>  <p>My position on the urgency of rolling out quantum-resistant cryptography has changed compared to just a few months ago. You might have heard this privately from me in the past weeks, but it’s time to signal and justify this change of mind publicly.</p> <p>There had been rumors for a while of expected and unexpected progress towards cryptographically-relevant quantum computers, but over the last week we got two public instances of it.</p> <p>First, <a href=\"https://research.google/blog/safeguarding-cryptocurrency-by-disclosing-quantum-vulnerabilities-responsibly/\">Google published a paper revising down dramatically the estimated number of logical qubits and gates required to break 256-bit elliptic curves</a> like NIST P-256 and secp256k1, which makes the attack doable in minutes on fast-clock architectures like superconducting qubits. They weirdly<sup id=\"fnref:goofy\"><a href=\"https://words.filippo.io/crqc-timeline/#fn:goofy\">1</a></sup> frame it around cryptocurrencies and mempools and salvaged goods or something, but the far more important implication is practical WebPKI MitM attacks.</p> <p>Shortly after, <a href=\"https://arxiv.org/abs/2603.28627\">a different paper came out from Oratomic showing 256-bit elliptic curves can be broken with as few as 10,000 physical qubits if you have non-local connectivity</a>, like neutral atoms seem to offer, thanks to better error correction. 
This attack would be slower, but even a single broken key per month can be catastrophic.</p> <p>They have this excellent graph on page 2 (<em>Babbush et al.</em> is the Google paper, which they presumably had preview access to):</p> <p><img alt=\"graph of physical qubit cost over time\" src=\"https://assets.buttondown.email/images/c768727d-01a9-4f44-919b-bab3c84cb81d.png?w=960&fit=max\"></p> <p>Overall, it looks like everything is moving: the hardware is getting better, the algorithms are getting cheaper, the requirements for error correction are getting lower.</p> <p>I’ll be honest, I don’t actually know what all the physics in those papers means. That’s not my job and not my expertise. My job includes risk assessment on behalf of the users that entrusted me with their safety. What I know is what at least some actual experts are telling us.</p> <p>Heather Adkins and Sophie Schmieg <a href=\"https://blog.google/innovation-and-ai/technology/safety-security/cryptography-migration-timeline/\">are telling us</a> that “quantum frontiers may be closer than they appear” and that <strong>2029</strong> is their deadline. That’s in 33 months, and no one had set such an aggressive timeline until this month.</p> <p>Scott Aaronson <a href=\"https://scottaaronson.blog/?p=9425\">tells us</a> that the “clearest warning that [he] can offer in public right now about the urgency of migrating to post-quantum cryptosystems” is a vague parallel with how nuclear fission research stopped happening in public between 1939 and 1940.</p> <p>The timelines presented at RWPQC 2026, just a few weeks ago, were much tighter than a couple years ago, and are already partially obsolete. The joke used to be that quantum computers have been 10 years out for 30 years now. Well, not true anymore, the timelines have started progressing.</p> <p>If you are thinking “well, this could be bad, or it could be nothing!” I need you to recognize how <strong>immediately dispositive</strong> that is. 
The bet is not “are you 100% sure a CRQC will exist in 2030?”, the bet is “are you 100% sure a CRQC will NOT exist in 2030?” I simply don’t see how a non-expert can look at what the experts are saying, and decide “I know better, there is in fact &lt; 1% chance.” Remember that you are betting with your users’ lives.<sup id=\"fnref:audience\"><a href=\"https://words.filippo.io/crqc-timeline/#fn:audience\">2</a></sup></p> <p>Put another way, even if the most likely outcome was no CRQC in our lifetimes, that would be completely irrelevant, because our users don’t want just better-than-even odds<sup id=\"fnref:odds\"><a href=\"https://words.filippo.io/crqc-timeline/#fn:odds\">3</a></sup> of being secure.</p> <p>Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments <a href=\"https://bas.westerbaan.name/notes/2026/04/02/factoring.html\">betray a lack of expertise</a>. As Scott Aaronson <a href=\"https://scottaaronson.blog/?p=9665#comment-2029013\">said</a>:</p> <blockquote> <p>Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”</p> </blockquote> <p>The job is not to be skeptical of things we’re not experts in, the job is to mitigate credible threats, and there are credible experts that are telling us about an imminent threat.</p> <p>In summary, it might be that in 10 years the predictions will turn out to be wrong, but at this point they might also be right soon, and that risk is now unacceptable.</p> <h2 id=\"now-what\">Now what</h2> <p>Concretely, what does this mean? 
It means we need to ship.</p> <p>Regrettably, we’ve got to roll out what we have.<sup id=\"fnref:lattices\"><a href=\"https://words.filippo.io/crqc-timeline/#fn:lattices\">4</a></sup> That means <strong>large ML-DSA signatures</strong> shoved in places designed for small ECDSA signatures, like X.509, with the exception of Merkle Tree Certificates for the WebPKI, which is thankfully <a href=\"https://security.googleblog.com/2026/02/cultivating-robust-and-efficient.html\">far enough along</a>.</p> <p>This is <em>not</em> the article I wanted to write. I’ve had a pending draft for months now explaining we should ship PQ key exchange now, but take the time we still have to adapt protocols to larger signatures, because they were all designed with the assumption that signatures are cheap. That other article is now wrong, alas: we don’t have the time if we need to be finished by 2029 instead of 2035.</p> <p>For key exchange, the migration to ML-KEM is going well enough but:</p> <ol> <li> <p>Any <strong>non-PQ key exchange</strong> should now be considered a potential active compromise, worthy of warning the user <a href=\"https://www.openssh.org/pq.html\">like OpenSSH does</a>, because it’s very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years.</p> </li> <li> <p>We need to forget about <strong>non-interactive key exchanges (NIKEs)</strong> for a while; we only have KEMs (which are only unidirectionally authenticated without interactivity) in the PQ toolkit.</p> </li> </ol> <p>It makes no more sense to deploy <strong>new schemes that are not post-quantum</strong>. I know, pairings were nice. I know, everything PQ is annoyingly large. I know, we had basically <em>just</em> figured out how to do ECDSA over P-256 safely. I know, there might not be practical PQ equivalents for threshold signatures or identity-based encryption. Trust me, I know it stings. 
But it is what it is.</p> <p><strong>Hybrid classic + post-quantum authentication makes no sense</strong> to me anymore and will only slow us down; we should go straight to pure ML-DSA-44.<sup id=\"fnref:44\"><a href=\"https://words.filippo.io/crqc-timeline/#fn:44\">6</a></sup> Hybrid key exchange is reasonably easy, with ephemeral keys that don’t even need a type or wire format for the composite private key, and a couple years ago it made sense to take the hedge. Authentication is not like that, and even with <a href=\"https://www.ietf.org/archive/id/draft-ietf-lamps-pq-composite-sigs-15.html\">draft-ietf-lamps-pq-composite-sigs-15</a> with its 18 composite key types nearing publication, we’d waste precious time collectively figuring out how to treat these composite keys and how to expose them to users. It’s also been two years since Kyber hybrids and we’ve gained significant confidence in the Module-Lattice schemes. Hybrid signatures cost time and complexity budget,<sup id=\"fnref:poor\"><a href=\"https://words.filippo.io/crqc-timeline/#fn:poor\">5</a></sup> and the only benefit is protection if ML-DSA is classically broken <em>before the CRQCs come</em>, which looks like the wrong tradeoff at this point.</p> <p>In <strong>symmetric encryption</strong>, we don’t need to do anything, thankfully. There is a common misconception that protection from Grover requires 256-bit keys, but <a href=\"https://words.filippo.io/post-quantum-age/#128-bits-are-enough\">that is based on an exceedingly simplified understanding of the algorithm</a>. A more accurate characterization is that with a circuit depth of 2⁶⁴ logical gates (the approximate number of gates that current classical computing architectures can perform serially in a decade) running Grover on a 128-bit key space would require a circuit size of 2¹⁰⁶. 
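The 2¹⁰⁶ figure above matches the MAXDEPTH accounting in the NIST PQC call for proposals, which put a Grover key search on AES-128 at roughly 2¹⁷⁰/MAXDEPTH total quantum gates. As a quick sanity check (variable names here are mine, purely illustrative):

```python
# Sanity check of the Grover cost cited above, using the MAXDEPTH accounting
# from the NIST PQC call for proposals: a Grover key search on AES-128 is
# estimated at roughly 2**170 / MAXDEPTH total quantum gates.
max_depth = 2**64      # gates a circuit can plausibly run serially in about a decade
total_gates = 2**170   # NIST estimate for breaking AES-128 with Grover
circuit_size = total_gates // max_depth  # required circuit width at that depth limit
assert circuit_size == 2**106            # the figure quoted in the text
```

Capping the serial depth is the whole point: with depth bounded at 2⁶⁴, the attacker is forced into a circuit of around 2¹⁰⁶ gates, far beyond anything contemplated for a CRQC, which is why 128-bit keys remain fine.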
There’s been no progress on this that I am aware of, and indeed there are old proofs that <a href=\"https://arxiv.org/abs/quant-ph/9711070\">Grover is optimal and its quantum speedup doesn’t parallelize</a>. Unnecessary 256-bit key requirements are harmful when bundled with the actually urgent PQ requirements, because they muddle the interoperability targets and risk slowing down the rollout of asymmetric PQ cryptography.</p> <p>In my corner of the world, we’ll have to start thinking about what it means for half the <strong>cryptography packages in the Go standard library</strong> to be suddenly insecure, and how to balance the risk of downgrade attacks and backwards compatibility. It’s the first time in our careers we’ve faced anything like this: SHA-1 to SHA-256 was not nearly this disruptive,<sup id=\"fnref:sha1\"><a href=\"https://words.filippo.io/crqc-timeline/#fn:sha1\">7</a></sup> and even that took forever with the occasional unexpected downgrade attack.</p> <p><strong>Trusted Execution Environments (TEEs)</strong> like Intel SGX and AMD SEV-SNP, and hardware attestation in general, are just f***d. None of their keys and roots are PQ, and I’ve heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can’t be relied upon. I had to reassess a whole project because of this, and I will probably downgrade them to barely “defense in depth” in my toolkit.</p> <p><strong>Ecosystems with cryptographic identities</strong> (like <a href=\"https://atproto.com/\">atproto</a> and, yes, cryptocurrencies) need to start migrating very soon, because if the CRQCs come before they are <em>done</em>, they will have to make extremely hard decisions, picking between letting users be compromised and bricking them.</p> <p>File encryption is especially vulnerable to store-now-decrypt-later attacks, so we’ll probably have to start warning and then erroring out on non-PQ <strong>age recipient types</strong> soon. 
It’s unfortunately only been a few months since we even added PQ recipients, in <a href=\"https://github.com/FiloSottile/age/releases/tag/v1.3.0\">version 1.3.0</a>.<sup id=\"fnref:ietf\"><a href=\"https://words.filippo.io/crqc-timeline/#fn:ietf\">8</a></sup></p> <p>Finally, this week I started <strong>teaching</strong> a PhD course in cryptography at the University of Bologna, and I’m going to mention RSA, ECDSA, and ECDH only as legacy algorithms, because that’s how those students will encounter them in their careers. I know, it feels weird. But it is what it is.</p> <p>For more willing-or-not PQ migration, follow me on Bluesky at <a href=\"https://bsky.app/profile/filippo.abyssdomain.expert\">@filippo.abyssdomain.expert</a> or on Mastodon at <a href=\"https://abyssdomain.expert/@filippo\">@filippo@abyssdomain.expert</a>.</p> <h2 id=\"the-picture\">The picture</h2> <p>Traveling back from an excellent <a href=\"https://atmosphereconf.org/\">AtmosphereConf 2026</a>, I saw my first aurora, from the north-facing window of a Boeing 747.</p> <p><img alt=\"Aurora borealis seen from an airplane window, with green vertical columns and curtains of light above a cloud layer, stars visible in the dark sky above.\" src=\"https://assets.buttondown.email/images/e29aa9d6-cb9e-40ee-9340-a2d128ddaabf.jpeg?w=960&fit=max\"></p> <p>My work is made possible by <a href=\"https://geomys.org/\">Geomys</a>, an organization of professional Go maintainers, which is funded by <a href=\"https://www.avalabs.org/\">Ava Labs</a>, <a href=\"https://goteleport.com/\">Teleport</a>, <a href=\"https://tailscale.com/\">Tailscale</a>, and <a href=\"https://sentry.io/\">Sentry</a>. Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the <a href=\"https://words.filippo.io/geomys\">Geomys announcement</a>.) 
Here are a few words from some of them!</p> <p>Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. <a href=\"https://goteleport.com/platform/identity/?utm=filippo\">Teleport Identity</a> is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews.</p> <p>Ava Labs — We at <a href=\"https://www.avalabs.org/\">Ava Labs</a>, maintainer of <a href=\"https://github.com/ava-labs/avalanchego\">AvalancheGo</a> (the most widely used client for interacting with the <a href=\"https://www.avax.network/\">Avalanche Network</a>), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.</p> </section> </article></div>","excerpt":"The risk that cryptographically-relevant quantum computers materialize within the next few years is now high enough to be dispositive, unfortunately.","image":"https://assets.buttondown.email/images/e29aa9d6-cb9e-40ee-9340-a2d128ddaabf.jpeg?w=960&fit=max","authors":[{"name":null,"url":"https://words.filippo.io/crqc-timeline/"}],"id":"47662234","url":"https://words.filippo.io/crqc-timeline/","external_url":"https://news.ycombinator.com/item?id=47662234","date_published":"2026-04-06T15:31:20Z"},{"title":"Battle for Wesnoth: open-source, turn-based strategy game","content_html":"<div class=\"page\" id=\"readability-page-1\"><div id=\"main\"> <div role=\"banner\" id=\"homebg1\"> <ul id=\"navlinks\"> <li><a 
href=\"https://www.wesnoth.org/\">Home</a></li> <li><a href=\"https://forums.wesnoth.org/viewforum.php?f=62\">News</a></li> <li><a href=\"https://wiki.wesnoth.org/Play\">Play</a></li> <li><a href=\"https://wiki.wesnoth.org/Create\">Create</a></li> <li><a href=\"https://forums.wesnoth.org/\">Forums</a></li> <li><a href=\"https://wiki.wesnoth.org/Project\">About</a></li> </ul> </div> <div id=\"homebg2\"> <div id=\"description\"> <p><cite>The Battle for Wesnoth</cite> is an <a href=\"https://opensource.org/faq#osd\">open source</a>, turn-based strategy game with a high fantasy theme. It features both singleplayer and online/hotseat multiplayer combat.</p> <p>Explore the world of Wesnoth and take part in its many adventures! Embark on a desperate quest to reclaim your rightful throne... Flee the Lich Lords to a new home across the sea... Delve into the darkest depths of the earth to craft a jewel of fire itself... Defend your kingdom against the ravaging hordes of a foul necromancer... Or lead a straggly band of survivors across the blazing sands to confront an unseen evil.</p> <p id=\"description-trail\">The choice is up to you...</p> </div> <div id=\"showcase\"> <p id=\"showcase-current\"> <iframe id=\"showcase-object\" width=\"854\" height=\"480\" src=\"https://www.youtube.com/embed/4Ebww6utt9I\" frameborder=\"0\" allowfullscreen></iframe> </p> </div> <div id=\"features\"> <h2>Features</h2> <ul> <li>Units hand-animated in a vibrant pixel art style, with semi-realistic portraits used for dialog.</li> <li>17 singleplayer campaigns and 55 multiplayer maps to choose from.</li> <li>Over 200 unit types in seven major factions, all with distinctive abilities, weapons and spells.</li> <li>Face off against other players over the Internet, or challenge your friends over a private/local network or hot-seat.</li> <li>Translated into over 30 different languages.</li> <li>Highly moddable engine combining <a href=\"https://wiki.wesnoth.org/ReferenceWML\">WML</a> and <a 
href=\"https://www.lua.org/\">Lua</a> scripting</li> <li>Tons of player-made content available from the official add-ons server: new campaigns, factions, and multiplayer maps with new and unique mechanics and artwork.</li> <li>Cross-platform compatible with Microsoft Windows, Apple macOS, and GNU/Linux.</li>  </ul> </div> </div>  <div id=\"homebg4\"> <div id=\"download\"> <h2>Download</h2>  <div id=\"stable\" data-version=\"1.18.6\" data-recommended> <h3>Stable</h3> <div> <div> <ul id=\"dlstable\"><li><a href=\"https://wesnoth.itch.io/battle-for-wesnoth\" data-os-label=\"Windows (64-bit)\"><span>Windows<br><span>(64-bit)</span></span></a></li><li><a href=\"https://wesnoth.itch.io/battle-for-wesnoth\" data-os-label=\"macOS (10.12+)\"><span>macOS<br><span>(10.12+)</span></span></a></li><li><a href=\"https://flathub.org/apps/details/org.wesnoth.Wesnoth\" data-os-label=\"Linux\"><span>Linux</span></a></li><li><a href=\"https://sourceforge.net/projects/wesnoth/files/wesnoth-1.18/wesnoth-1.18.6/wesnoth-1.18.6.tar.bz2/download\" data-os-label=\"Source\"><span>Source</span></a></li></ul> <ul><li><a href=\"https://forums.wesnoth.org/viewtopic.php?t=60359\">Update announcement</a></li><li><a href=\"https://www.wesnoth.org/start/1.18/\">Release notes for 1.18</a></li><li><a href=\"https://wiki.wesnoth.org/Download#Stable_.281.18_branch.29\">Checksums and other downloads</a></li></ul> </div> <div><p>The <b>stable</b> version of Wesnoth is recommended for new and veteran players and content creators on all platforms, as it offers a well-supported and extensively-tested experience, with new releases delivering bug fixes and translation updates.</p><p>Players can also obtain this version of Wesnoth from <a href=\"https://store.steampowered.com/app/599390\">Steam</a> and the <a href=\"https://apps.apple.com/us/app/the-battle-for-wesnoth/id1450738104\">Mac App Store</a>, with the added benefit of continuous automatic updates.</p><h4>System Requirements</h4> <figure> <table> <thead> 
<tr> <th></th> <th>Minimum</th> <th>Recommended</th> </tr> </thead> <tbody> <tr><th>System</th><td> Windows 10 1903 (64-bit) or later<br> macOS 10.12 or later<br> Ubuntu 20.04 or compatible</td><td> Windows 10 (64-bit) or later<br> macOS 10.14 or later<br> Ubuntu 22.04 or compatible</td></tr><tr><th>CPU</th><td>Dual-core 2.0 GHz or better</td><td>Dual-core 3.2 GHz or better</td></tr><tr><th>RAM</th><td>4 GB</td><td>4 GB</td></tr><tr><th>Disk</th><td>800 MB free</td><td>2 GB free</td></tr><tr><th>Graphics</th><td>800x600 or larger screen</td><td>1024x768 or larger screen</td></tr><tr><th>Input</th><td colspan=\"2\">Keyboard and mouse required</td></tr> </tbody> </table> </figure> </div> </div> </div><div id=\"dev\" data-version=\"1.19.22\"> <h3>Development</h3> <div> <div> <ul id=\"dldev\"><li><a href=\"https://wesnoth.itch.io/battle-for-wesnoth\" data-os-label=\"Windows (64-bit)\"><span>Windows<br><span>(64-bit)</span></span></a></li><li><a href=\"https://wesnoth.itch.io/battle-for-wesnoth\" data-os-label=\"macOS (10.13+)\"><span>macOS<br><span>(10.13+)</span></span></a></li><li><a href=\"https://wiki.wesnoth.org/WesnothBinariesLinux\" data-os-label=\"Linux\"><span>Linux</span></a></li><li><a href=\"https://sourceforge.net/projects/wesnoth/files/wesnoth/wesnoth-1.19.22/wesnoth-1.19.22.tar.bz2/download\" data-os-label=\"Source\"><span>Source</span></a></li><li><a href=\"https://f-droid.org/en/packages/org.wesnoth.Wesnoth/\" data-os-label=\"Android\"><span>Android</span></a></li></ul> <ul><li><a href=\"https://forums.wesnoth.org/viewtopic.php?t=60593\">Update announcement</a></li><li><a href=\"https://wiki.wesnoth.org/Download#Development_.281.19_branch.29\">Checksums and other downloads</a></li></ul> </div> <div><p> <strong>New players are advised to choose the stable version instead.</strong></p><p>The <b>development</b> version of Wesnoth is geared towards veteran players and content creators who wish to try out the latest additions to the game. 
Updates are not guaranteed to be stable and may include game-breaking changes.</p><p>Players can also obtain this version of Wesnoth from <a href=\"https://store.steampowered.com/app/599390\">Steam</a> by selecting it in the Betas tab in the game’s properties after installation, with the added benefit of continuous automatic updates.</p><h4>System Requirements</h4> <figure> <table> <thead> <tr> <th></th> <th>Minimum</th> <th>Recommended</th> </tr> </thead> <tbody> <tr><th>System</th><td> Windows 10 1903 (64-bit) or later<br> macOS 10.13 or later<br> Ubuntu 20.04 or compatible<br> Android 6</td><td> Windows 10 1903 (64-bit) or later<br> macOS 10.14 or later<br> Ubuntu 22.04 or compatible<br> Android 10+</td></tr><tr><th>CPU</th><td>Dual-core 2.0 GHz or better</td><td>Dual-core 3.2 GHz or better</td></tr><tr><th>RAM</th><td>4 GB</td><td>4 GB</td></tr><tr><th>Disk</th><td>2 GB free</td><td>2 GB free</td></tr><tr><th>Graphics</th><td colspan=\"2\">1280x720 or larger screen</td></tr><tr><th>Input</th><td colspan=\"2\">Keyboard and mouse required</td></tr> </tbody> </table> </figure> </div> </div> </div> </div> <div id=\"contribute\"> <h2>Contribute</h2><p>Wesnoth is made possible by the efforts of players and enthusiasts from all over the world. Whether it be by creating new add-on content, contributing patches for core content and the game engine, or just testing the development version, you too can help shape the next version of Wesnoth!</p> <ul> <li><a href=\"https://wiki.wesnoth.org/Create\">Introduction to Content Creation</a></li> <li><a href=\"https://wiki.wesnoth.org/Project\">About the Battle for Wesnoth Project</a></li> </ul> </div> <div id=\"donate\"> <h2>Donate</h2><div> <p id=\"donate-info\">If you would like to donate to the project, you can do so on Liberapay or when downloading Wesnoth through itch.io. Wesnoth does rely on the work of dedicated volunteers, but no project can function completely cost-free. 
Revenue from donations goes towards maintaining our servers, websites, and commissioning new art and music.</p> </div> </div> </div> </div></div>","excerpt":"The Battle for Wesnoth is an open source, turn-based strategy game with a high fantasy theme. It features both singleplayer and online/hotseat multiplayer combat.","authors":[{"name":null,"url":"https://www.wesnoth.org","avatar":"https://www.wesnoth.org/wesmere/img/favicon-32.png"}],"id":"47664186","url":"https://www.wesnoth.org","external_url":"https://news.ycombinator.com/item?id=47664186","date_published":"2026-04-06T17:37:38Z"}]}