The Executive's Internet
Sat, Apr 4th
 TECHNOLOGY NEWS
Mac RumorsApr 03, 2026
Apple Seeds Revised iOS 26.5 and iPadOS 26.5 Betas to Developers
Apple today seeded revised first betas of upcoming iOS 26.5 and iPadOS 26.5 updates to developers for testing purposes, with the software coming four days after Apple seeded the initial betas.


Mac RumorsApr 03, 2026
This Music Selection Tweak in iOS 26.4 Will Save You Bags of Time
If you often find yourself adding a track to an Apple Music playlist, going back, and then adding it to other playlists, iOS 26.4 includes an option that could save you bags of time: You can now select multiple playlists when adding a song.


Mac RumorsApr 03, 2026
iFixit AirPods Max 2 Teardown: Same Design, Same Repairability Issues
Repair site iFixit today shared a teardown of Apple's new AirPods Max 2 headphones, and as expected, there are few changes. iFixit says the AirPods Max 2 are "basically the same" as the original AirPods Max headphones that came out in 2020.


EngadgetApr 02, 2026
Google releases Gemma 4, a family of open models built off of Gemini 3
When Google released Gemini 3 Pro at the end of last year, it was a significant step forward for the company's proprietary large language models. Now, the company is bringing some of the same technology and research that made those models possible to the open source community with the release of its new family of Gemma 4 open-weight models.

Google is offering four different versions of Gemma 4, differentiated by the number of parameters on offer. For edge devices, including smartphones, the company has the 2-billion and 4-billion "Effective" models. For more powerful machines, there's the 26-billion "Mixture of Experts" and 31-billion "Dense" systems. For the unfamiliar, parameters are the settings a large language model can tweak to generate an output. Typically, models with more parameters will deliver better answers than ones with fewer, but running them also requires more powerful hardware.

With Gemma 4, Google claims it's managed to engineer systems with "an unprecedented level of intelligence-per-parameter." To back up this claim, the company points to the performance of Gemma 4's 31-billion and 26-billion variants, which claimed the third and sixth spots respectively on Arena AI's text leaderboard, beating out models 20 times their size.     

All of the models can process video and images, making them ideal for tasks like optical character recognition. The two smaller models are also capable of processing audio inputs and understanding speech. Separately, Google says the Gemma 4 family can generate code offline, meaning you could use the models for vibe coding without an internet connection. Google has also trained the models in more than 140 languages.

  • CEOExpress
  • c/o CommunityScape | 200 Anderson Avenue
    Rochester, NY 14607
  • Contact
  • As an Amazon Associate
    CEOExpress earns from
    qualifying purchases.

©1999-2026 CEOExpress Company LLC