|
Unlike some Android phones, iPhones don't have a dedicated notification LED that lights up when you get a call, text, or other alert. What iPhones do include is an optional Accessibility feature for the deaf and hard of hearing that blinks the rear camera flash as a visual cue for incoming notifications. And in iOS 26.2, Apple has added the ability to flash the front display, too.
|
|
OpenAI rebuilds ChatGPT Images to challenge Google's Nano Banana, bringing faster image generation, editing, and conversational iteration into one creative workflow.
|
|
Back in September during Meta Connect, the company previewed a new capability for its smart glasses lineup called Conversation Focus. The feature, which amplifies the voices of people around you, is now starting to roll out in the company's latest batch of software updates.
When enabled, the feature is meant to make it easier to hear the people you're speaking with in a crowded or otherwise noisy environment. "You'll hear the amplified voice sound slightly brighter, which will help you distinguish the conversation from ambient background noise," Meta explains. It can be enabled either via voice commands ("hey Meta, start Conversation Focus") or by adding it as a dedicated "tap-and-hold" shortcut.
Meta is also adding a new multimodal AI feature for Spotify. With the update, users can ask their glasses to play music on Spotify that corresponds with what they're looking at by saying "hey Meta, play a song to match this view." Spotify will then start a playlist "based on your unique taste, customized for that specific moment." For example, looking at holiday decor might trigger a similarly themed playlist, though it's not clear how Meta and Spotify will translate more abstract concepts into themed playlists.
Both updates are starting to roll out now to Meta Ray-Ban glasses (both Gen 1 and Gen 2 models).
|
|
OpenAI's Image Model 1.5 is out now, and it comes with a new creative studio for editing.
|
|
Following the release of GPT-5.2 last week, OpenAI has begun rolling out a new image generation model. The company says the updated ChatGPT Images is four times faster than its predecessor. If you're a frequent ChatGPT user, you'll know it can sometimes take a while for OpenAI's servers to create images, particularly during peak times and if you're not paying for ChatGPT Plus. In that respect, any improvement in speed is welcome.
The new version is also better at following instructions, including when you want to edit something the model just generated. You can ask the system to add, subtract, combine, blend and even transpose elements. At the same time, OpenAI says the update offers better text rendering. That's something many image models have traditionally struggled with, but according to the company, the new ChatGPT Images is capable of handling denser and smaller text. As part of today's model update, OpenAI is also adding a dedicated Images section to the ChatGPT sidebar. Here you'll find preset filters and prompts you can turn to for inspiration.
The new ChatGPT Images arrives just as Nano Banana Pro is driving a surge in Gemini usage. In October, Google said its chatbot had 650 million users, up from 450 million just a few months earlier in July. Nano Banana Pro has proven so popular that the company recently
|
|
In line with previous rumors, The Information today reported that Apple is planning to release a special 20th-anniversary iPhone less than two years from now.
|
|