Apple today provided public beta testers with the first releases of the upcoming iOS 26.5, iPadOS 26.5, macOS Tahoe 26.5, watchOS 26.5, and tvOS 26.5 updates for testing purposes. The public betas arrive four days after the developer betas, though Apple seeded revised iOS 26.5 and iPadOS 26.5 builds to developers earlier today.
Earlier this week, Apple seeded the first beta of iOS 26.5 to developers. The software update is relatively minor so far, which is not too surprising given that Apple is likely shifting its focus toward iOS 27. Apple is expected to unveil iOS 27 during its WWDC 2026 keynote on June 8, with a public release expected in September.
When Google released Gemini 3 Pro at the end of last year, it was a significant step forward for the company's proprietary large language models. Now, the company is bringing some of the same technology and research that made those models possible to the open source community with the release of its new family of Gemma 4 open-weight models.
Google is offering four different versions of Gemma 4, differentiated by the number of parameters on offer. For edge devices, including smartphones, the company has the 2-billion and 4-billion "Effective" models. For more powerful machines, there's the 26-billion "Mixture of Experts" and 31-billion "Dense" systems. For the unfamiliar, parameters are the settings a large language model can tweak to generate an output. Typically, models with more parameters deliver better answers than ones with fewer, but running them also requires more powerful hardware.
With Gemma 4, Google claims it's managed to engineer systems with "an unprecedented level of intelligence-per-parameter." To back up this claim, the company points to the performance of Gemma 4's 31-billion and 26-billion variants, which claimed the third and sixth spots respectively on Arena AI's text leaderboard, beating out models 20 times their size.
All of the models can process video and images, making them well suited to tasks like optical character recognition. The two smaller models are also capable of processing audio inputs and understanding speech. Separately, Google says the Gemma 4 family is capable of generating code offline, meaning you could use the models for vibe coding without an internet connection. Google has also trained the models on more than 140 languages.