"We never got to finish": Ex-Windows chief reveals Microsoft could have already improved Windows 11 by now — cutting memory and storage demands by 20%

The Hot Take: Yeah, Microsoft (and most companies) could have done this long ago, but RAM being priced through the roof is the only reason they're looking at it now. I wouldn't doubt all the browser folks are going to suddenly start looking at their memory usage here soon too.

A former Windows leader recently discussed a project that promised a 20% reduction in Windows 11’s memory and storage usage.

Read the full article

Intel Arc Pro B70 Outclasses NVIDIA’s RTX Pro 4000 In AI At Half The Cost, 33% More Memory

The Hot Take: We need more competition. AMD seems very quiet lately and might come out of nowhere with a beast, but they haven't yet. So Intel coming back in, even just for an AI bubble grab, still helps us all. Especially when that bubble pops.

Intel's Arc Pro B70 is designed to offer accessible local inference for AI users, delivering more memory at half the price of the competition. Intel Arc Pro B70 vs NVIDIA RTX PRO 4000 Blackwell: 32 GB vs 24 GB, $949 vs $1,800, more AI context, 2x tokens per dollar. We talked about the unveiling of the Intel Arc Pro B70 graphics card in our other post, where we highlighted the specifications, availability, and prices of the product. The B70 is going to be the flagship Pro & AI product from Intel within its Arc Pro stack, and they have […]
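The headline comparison is easy to sanity-check from the quoted specs. A minimal sketch using the $949/$1,800 prices and 32 GB/24 GB capacities above (the "2x tokens per dollar" claim depends on throughput figures not quoted here, so that part is taken on faith):

```python
# Sanity-check Intel's headline comparison from the quoted specs.
b70_price, b70_mem = 949, 32      # USD, GB (Arc Pro B70)
rtx_price, rtx_mem = 1800, 24     # USD, GB (RTX PRO 4000 Blackwell)

price_ratio = rtx_price / b70_price    # ~1.9x: roughly "half the cost"
mem_advantage = b70_mem / rtx_mem      # ~1.33x more VRAM

usd_per_gb_b70 = b70_price / b70_mem   # ~$29.7 per GB of VRAM
usd_per_gb_rtx = rtx_price / rtx_mem   # $75.0 per GB of VRAM

print(f"price ratio: {price_ratio:.2f}x")
print(f"memory advantage: {mem_advantage:.2f}x")
print(f"$/GB: B70 ${usd_per_gb_b70:.1f} vs RTX ${usd_per_gb_rtx:.1f}")
```

On cost per gigabyte of VRAM alone, the B70 comes out around 2.5x cheaper, which is where the "half the cost with more memory" framing comes from.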

Read the full article

Samsung's 2nm yield surpasses 60%, tripling in 6-month span

The Hot Take: That's amazing! We need more manufacturers than just TSMC. I still hope they're looking to build a factory stateside, so we don't have to rely on the one in South Korea.

Samsung Electronics has reportedly raised the yield of its 2nm wafer foundry process above 60%, a significant jump from around 20% in the second half of 2025. Industry analysts say this improvement not only cuts manufacturing costs but also boosts Samsung's chances of securing new orders.
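The cost impact of a yield jump like that is straightforward to model. A minimal sketch of cost per good die (the $20,000 wafer price and 300 candidate dies per wafer are illustrative assumptions, not figures from the article):

```python
# Illustrative only: how per-die cost scales with yield.
# Wafer price and dies-per-wafer are assumed, not from the article.
WAFER_COST_USD = 20_000   # hypothetical 2nm wafer price
DIES_PER_WAFER = 300      # hypothetical candidate dies per wafer

def cost_per_good_die(yield_rate: float) -> float:
    """Wafer cost spread across only the functional dies."""
    return WAFER_COST_USD / (DIES_PER_WAFER * yield_rate)

old = cost_per_good_die(0.20)   # ~20% yield in H2 2025
new = cost_per_good_die(0.60)   # >60% yield now
print(f"per-die cost: ${old:.2f} -> ${new:.2f} ({old / new:.0f}x cheaper)")
```

Whatever the real wafer price, tripling yield cuts the cost of each sellable die to a third, which is why analysts tie the improvement directly to winning new orders.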

Read the full article

Nvidia admits one GPU to rule them all was a fairy tale

The Hot Take: Nvidia is starting to feel the heat of competition and watch those dollars evaporate as customers try other vendors.

Nvidia is preparing to launch a new chip designed to speed up AI responses, breaking with its long-running habit of flogging the same processor for every job. Nvidia chief executive Jensen Huang is expected to unveil a chip focused on "inference", meaning running models rather than training them.

According to people familiar with the plans for GTC next week, the chip is the first new product to emerge from December's $20bn deal to hire the founders of Groq, a start-up building "language processing units" tuned for high-speed answers to complex AI queries. Three months after that deal, Nvidia is expected to debut a Groq-based LPU to sit alongside its forthcoming flagship Vera Rubin graphics processing unit. It is part of a product family meant to head off challengers and meet new kinds of AI applications.

The move lands as the world's most valuable company gets grief from start-ups and customers, such as Google, all busy cooking up their own AI chips. This week, Meta announced a new family of four inference-focused processors. One Silicon Valley venture investor said: "We are entering an interesting phase that is not 'Nvidia dominant'."

For the past three years, Nvidia's $4.5tn market capitalisation has been built on its GPUs, which have become the backbone of generative AI. They train models such as the ones behind OpenAI's ChatGPT. Huang has insisted that a single system can handle training and then run the chatbots and coding tools built on top. Big Tech has spent hundreds of billions deploying these boxes while funding their own specialised silicon. But the growing sophistication of AI tools, including "agentic" coding systems, is pushing Huang to ditch the mantra that one GPU fits every workload.

The Groq deal was worth about $20bn, according to people familiar with the transaction, making it one of the biggest deals in Nvidia's 33-year history. It includes licensing and the hiring of key talent, including Groq founder and former Google chip executive Jonathan Ross. Groq, which had been working with Samsung to manufacture its products, previously bragged that its LPUs were faster and more efficient than Nvidia's GPUs for inference. Nvidia clearly listened.

Nvidia's flagship Blackwell and Rubin systems lean on high-bandwidth memory to cope with the massive data loads that AI models fling around. But HBM is expensive and in increasingly short supply as SK Hynix and Micron struggle to keep up with demand. The Groq-style chip will use SRAM rather than the DRAM used for HBM, according to people familiar with Nvidia's plans, because SRAM is more available and better suited to speeding up AI "reasoning" tasks.

Bank of America reckons that by 2030, inference will account for 75 per cent of AI data centre spending, up from about 50 per cent last year, and it expects a "broadened AI portfolio" at GTC.

Read the full article

11-month-old Russian outfit claims it has developed 16-core and 32-core chips, flaunts Cyrillic-badged silicon — chips appear to be sanctions-swerving rebadged Chinese Loongson processors

The Hot Take: Looks like we're going back to tech silos globally. Though I think we still have bigger unfair-competition problems to address domestically.

Russia-based Tramplin Electronics obtains samples of Loongson's LS3C6000 processors with Cyrillic inscriptions, claims these are its own CPUs.

Read the full article

Microsoft: Removing some Copilots will improve Windows 11

The Hot Take: Finally listening to the customers? Nah, this is to quiet them just enough to keep moving toward their own goals.

'Doze boss admits quality is down, promises smaller memory footprint and fixes for many well-known issues Microsoft has acknowledged that it needs to improve the quality of Windows 11 and outlined its plan to get the job done.…

Read the full article

Elon Musk Announces $20B 'Terafab' Chip Plant in Texas To Supply His Companies

The Hot Take: US domestic chip manufacturing appears to be exploding. That's an insane goal, but too bad it's just for his companies.

"Billionaire Elon Musk has announced plans to build a $20 billion chip plant in Austin, Texas" reports a local news station: Musk announced on Saturday night during a livestream on his social media platform X that the plant, called "Terafab," will be built near Tesla's campus and gigafactory in eastern Travis County. The long-anticipated project is a joint venture between Musk-owned properties Tesla, SpaceX and xAI... The Terafab plant is expected to begin production in 2027. Musk "has said the semiconductor industry is moving too slow to keep up with the supply of chips he expects to need," writes Bloomberg — quoting Musk as saying "We either build the Terafab or we don't have the chips, and we need the chips, so we build the Terafab." Musk detailed some specific plans, including producing chips that can support 100 to 200 gigawatts a year of computing power on Earth, and chips that can support a terawatt in space, but gave no timelines for the facility or its output... The facility is expected to make two types of chips, one of which will be optimized for edge and inference, primarily for his vehicle, robotaxi and Optimus humanoid robots. The other will be a high-power chip, designed for space that could be used by SpaceX and xAI... Musk said he expects xAI to use the vast majority of the chips. During the presentation, Musk also unveiled a speculative rendering of a future "mini" AI data center satellite, one piece of a much larger satellite system that he wants SpaceX to build to do complex computing in space. In January, SpaceX requested a license from the Federal Communications Commission to launch one million data center satellites into orbit around Earth. Musk said that the mini satellite he revealed would have the capacity for 100 kilowatts of power. "We expect future satellites to probably go to the megawatt range," Musk said. 
Raising money to build and launch AI data centers in space is one of the driving forces behind SpaceX's planned IPO later this year. SpaceX is expected to raise as much as $50 billion in a record-setting IPO this summer which could value it at more than $1.75 trillion, Bloomberg News reported earlier. Read more of this story at Slashdot.

Read the full article