|
Microsoft's Copilot AI service is set to run locally on PCs, Intel told Tom's Hardware. The company also said that next-gen AI PCs would require built-in neural processing units (NPUs) with over 40 TOPS (trillion operations per second) of power — beyond the capabilities of any consumer processor on the market.
Intel said that the AI PCs would be able to run "more elements of Copilot" locally. Currently, Copilot runs nearly everything in the cloud, even small requests, which introduces lag that is tolerable for larger jobs but not ideal for quick ones. Adding local compute capability would cut that lag, while potentially improving performance and privacy as well.
Microsoft was previously rumored to require 40 TOPS on next-gen AI PCs (along with a modest 16GB of RAM). Right now, Windows doesn't make much use of NPUs, apart from running Windows Studio Effects features like background blurring for webcams on Surface devices. ChromeOS and macOS, by contrast, use NPU power for more video and audio processing features, along with OCR, translation, live transcription and more, Ars Technica noted.
So far, the fastest NPU in a shipping consumer processor belongs to Apple's M3 family, which offers 18 TOPS across the lineup (M3, M3 Pro and M3 Max). AMD's Ryzen 8040 and 7040 laptop chips follow at roughly 16 and 10 TOPS, respectively.
|
|
GitHub, the online developer platform that allows users to create, store, manage, and share their code, has been on a generative AI (genA) journey since before ChatGPT or Copilot was widely available to the public.
Through an early partnership with Microsoft, the dev platform introduced its own version of the tool, GitHub Copilot, two-and-a-half years ago.
The genAI-based conversational chat interface is now used by both GitHub users and internal employees to assist with code development, and it also doubles as an automated help desk tool.
|
|