Richard Lai


Xiaomi just couldn't wait until MWC to unveil its latest Leica-endorsed flagship phone. Following the 12S Ultra and 13 Ultra, Xiaomi is finally catching up with the competition by picking up Sony's second-gen 1-inch mobile camera sensor, the LYT-900, for its brand-new 14 Ultra. This marks the second device to sport this crème de la crème of imaging silicon, after Oppo's Hasselblad-tuned Find X7 Ultra from early January. That said, the Xiaomi 14 Ultra does have a slight edge with its faster main variable aperture at up to f/1.63, beating the Find X7 Ultra's f/1.8 — on paper, at least.

With the exception of the faster f/2.5 aperture on the new 120mm periscope shooter, the remaining three Summilux rear cameras are almost identical to the previous set, and they are still powered by a Sony IMX858 sensor each. The biggest change on this front is the new Xiaomi AISP neural chip, which Xiaomi claims is the world's first AI large-model computational platform for photography. It leverages four large models — "FusionLM," "ToneLM," "ColorLM" and "PortraitLM" — to fine-tune results, especially with digital zoom at 30x or more.

Xiaomi 14 Ultra
Xiaomi

The 14 Ultra also packs some surprises in the battery, durability and connectivity categories. As seen in the super-slim Mix Fold 3 and Honor Magic V2, the 14 Ultra is Xiaomi's first candybar to jump onto the silicon-carbon cell bandwagon, in order to pack 5,300mAh of juice into a space that's 8 percent smaller. Xiaomi claims that compared to the previous model, you get 17-percent more stamina with this battery upgrade. To replenish the battery, you get both 90W of wired fast charging and 80W of wireless fast charging — these take 12.5 minutes and 20 minutes to reach 50 percent, respectively.

Going along with the "Year of the Dragon" theme, Xiaomi claims that the 14 Ultra's "Dragon Armor" structure has double the bending resistance, thanks to its special "6M42" aluminum alloy mid-frame (supposedly crafted with a better grip as well). The Chinese brand even claims that this part is 8-percent stronger than the iPhone 15 Pro's titanium frame, but it decided to offer a more premium titanium version as well.

Xiaomi 14 Ultra
Xiaomi

This metallic frame is complemented by "Dragon Crystal" glass — shielding the 6.73-inch AMOLED screen (3,200 x 1,440, 120Hz; made by TCL CSOT) — with apparently 10 times more drop resistance. Xiaomi also touts its new vegan leather material, which has been certified by SGS to have six times more wear resistance, better dirt resistance and less yellowing from ultraviolet rays — an important breakthrough particularly for the white version. But if you prefer something shiny, the 14 Ultra is also available in a blue "Dragon Crystal" finish, which resembles ceramic but isn't as heavy — it weighs only 5 grams more than its vegan leather counterpart. Regardless of the cover material, the device has an IP68 rating for dust and water resistance.

Much like the 14 and 14 Pro from October (and the SU7 electric sedan's in-car entertainment system), the 14 Ultra runs on Xiaomi's Android-based HyperOS, and it's powered by Qualcomm's latest Snapdragon 8 Gen 3 processor. This is cooled by a dual-loop vapor chamber, which also sucks heat out of the camera modules. The processor is backed by Xiaomi's new proprietary chip, the Surge T1, which apparently boosts cellular connectivity by up to 37 percent, as well as Wi-Fi and Bluetooth connections by up to 16 percent.

Xiaomi 14 Ultra
Xiaomi

This device also supports two-way satellite calling and texting, now with 60-percent faster satellite locking and 29-percent faster satellite connection. As a bonus, when you're lost, you can send your location data along with vital signs from your wearable device — presumably exclusive to one of the latest Xiaomi watches or smart bands. Sadly, these satellite features are likely limited to China for now.

We'll likely hear about the Xiaomi 14 Ultra's global launch at MWC next week, but for now, we can refer to the Chinese pre-order pricing. The vegan leather and ceramic variants start at 6,499 yuan (about $900) for 12GB of RAM and 256GB of storage, and max out at 7,799 yuan ($1,080) with 16GB of RAM and 1TB of storage. These go on sale February 27. The titanium version with dark gray vegan leather is based on the top configuration but costs an extra 1,000 yuan ($140), and it won't be available until March 12.

Xiaomi 14 Ultra Titanium Edition
Xiaomi

Like its predecessor, the 14 Ultra has an optional photography kit with a shutter-button grip that adds an extra 1,500mAh of power. The upgrade this time is a new video recording button, along with a customizable jog dial. You can get this accessory for 699 yuan ($100) as a bundle with the phone.

This article originally appeared on Engadget at https://www.engadget.com/xiaomi-14-ultra-combines-a-1-inch-camera-sensor-with-four-ai-imaging-models-131127654.html?src=rss





Following its autonomous food delivery launch in Miami and Fairfax, Virginia, Uber Eats will soon be offering the same robotic service in Japan — its first outside the US. It is once again collaborating with Cartken, a startup founded by Google alumni, with local compliance help from Mitsubishi Electric, to bring a fleet of Model C sidewalk delivery robots to select areas of Tokyo in March. Uber Eats Japan CEO Shintaro Nakagawa says the autonomous delivery service will help address the local labor shortage, while complementing the existing human delivery methods "by bicycle, motorbike, light cargo, and on foot."

Cartken's six-wheeled Model C uses six cameras and AI models for autonomous driving and obstacle detection, with a remote-control mode available when needed. With guidance from Mitsubishi, the robot has been modified to suit local needs in Japan. For one, its speed is capped at 5.4 km/h (about 3.36 mph) per local regulation, a lot slower than the 6 mph it's actually capable of. The loading capacity has also been reduced from 1.5 cubic feet to about 0.95 cubic feet (27 liters), likely due to the extra thermal insulation in the compartment. Uber Eats adds that for the sake of privacy, people's faces are automatically masked in footage captured by the robots.
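For the curious, the unit conversions quoted above check out; here's a quick sketch in plain Python, using only the figures from the announcement:

```python
# Sanity-checking the conversions quoted above.
KM_PER_MILE = 1.609344
LITERS_PER_CUBIC_FOOT = 28.316846592

def kmh_to_mph(kmh: float) -> float:
    """Convert kilometers per hour to miles per hour."""
    return kmh / KM_PER_MILE

def liters_to_cubic_feet(liters: float) -> float:
    """Convert liters to cubic feet."""
    return liters / LITERS_PER_CUBIC_FOOT

print(round(kmh_to_mph(5.4), 2))           # 3.36 mph speed cap
print(round(liters_to_cubic_feet(27), 2))  # 0.95 cubic feet of cargo
```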

While this is Uber Eats' robotic delivery debut in Japan, Cartken already has a presence there thanks to Mitsubishi. Since early 2022, the duo has worked with Starbucks, local e-commerce giant Rakuten and supermarket chain Seiyu in some parts of Japan. In the US, Cartken also has a partnership with Grubhub to provide autonomous food delivery on college campuses, including Ohio State University and the University of Arizona.

Even though Uber Eats has yet to share which Tokyo restaurants will be tapping into its robotic delivery service, it should have no problem finding partners given Cartken's prior local experience. That said, I highly doubt the pair would risk trialing their robots through a crowd of drunkards in Shibuya just yet.

This article originally appeared on Engadget at https://www.engadget.com/uber-eats-expands-its-autonomous-food-delivery-service-to-japan-092727592.html?src=rss



Instant messaging app Signal is best known for its privacy-related settings, though with phone numbers being the heart of the platform since its inception, there was no way to fully hide your own number until now. Earlier today, Signal announced that you'll soon be able to create a unique username (not to be confused with your profile name), which you can share with others via a link or QR code — as opposed to sharing your number. You'll be able to change your username as often as you want, but it needs to end with two or more digits as part of Signal's anti-spoofing efforts. You can even delete your username entirely, as it's an optional feature.
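Signal hasn't published its full validation rules here, so the sketch below covers only the one constraint mentioned above (the username must end with two or more digits); the allowed character set and everything else about it are assumptions, not Signal's spec:

```python
import re

# Hypothetical validator for the single rule described above: the
# username must end in two or more digits. The permitted characters
# (ASCII letters, digits, underscores) are an assumption.
USERNAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*\d{2,}$")

def looks_valid(username: str) -> bool:
    """Check the trailing-digits rule on a candidate username."""
    return bool(USERNAME_RE.match(username))

print(looks_valid("alice01"))   # True: ends with two digits
print(looks_valid("mallory7"))  # False: only one trailing digit
```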

Naturally, you'll still need a phone number to sign up for Signal, but note that with the new default, your number will no longer be visible to everyone (you can change this setting manually, if needed). There will also be a new option which prevents people from finding you by your number; they will need to have your exact unique username to do so. In other words, people who already have your number won't necessarily know that you are also on Signal, which is a good thing if you prefer to stay anonymous in this platform's public groups.

As with any new feature, spammers and scammers will eventually find ways to abuse usernames, since you won't be able to verify numbers instantly. Pro tip: when a new contact appears to be an acquaintance, always double-check with them through other means — preferably in person, or at least via a phone call. Look out for these new Signal features in a few weeks' time, or get an early taste in the beta release.

This article originally appeared on Engadget at https://www.engadget.com/signal-usernames-will-keep-your-phone-number-private-050008243.html?src=rss








In an age where fraudsters are using generative AI to scam people out of money or tarnish reputations, tech firms are coming up with methods to help users verify content — at least still images, to begin with. As teased in its 2024 misinformation strategy, OpenAI is now including provenance metadata in images generated with ChatGPT on the web and the DALL-E 3 API, with their mobile counterparts receiving the same upgrade by February 12.

The metadata follows the C2PA (Coalition for Content Provenance and Authenticity) open standard, and when one such image is uploaded to the Content Credentials Verify tool, you'll be able to trace its provenance lineage. For instance, an image generated using ChatGPT will show an initial metadata manifest indicating its DALL-E 3 API origin, followed by a second metadata manifest showing that it surfaced in ChatGPT.
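Real C2PA manifests are cryptographically signed binary structures embedded in the file, so the snippet below is only a toy model of the chaining idea described above; the `claim_generator` field name comes from the C2PA standard, but everything else here is illustrative:

```python
# Toy model of a C2PA provenance chain: each tool that touches the
# image appends its own manifest, and the list records the lineage.
# Real manifests are signed JUMBF binary structures, not plain dicts.
def provenance_chain(manifests):
    """Render the lineage from the earliest manifest to the latest."""
    return " -> ".join(m["claim_generator"] for m in manifests)

# The two-manifest lineage described above for a ChatGPT image:
image_manifests = [
    {"claim_generator": "DALL-E 3 API"},  # where the pixels originated
    {"claim_generator": "ChatGPT"},       # where the image surfaced
]

print(provenance_chain(image_manifests))  # DALL-E 3 API -> ChatGPT
```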

Despite the fancy cryptographic tech behind the C2PA standard, this verification method only works when the metadata is intact; the tool is of no use if you upload an AI-generated image without its metadata — as is the case with any screenshot, or most images uploaded to social media. Unsurprisingly, the current sample images on the official DALL-E 3 page returned blank results as well. On its FAQ page, OpenAI admits that this isn't a silver bullet for addressing misinformation, but it believes the key is to encourage users to actively look for such signals.

While OpenAI's latest effort at thwarting fake content is currently limited to still images, Google's DeepMind already has SynthID for digitally watermarking both images and audio generated by AI. Meanwhile, Meta has been testing invisible watermarking via its AI image generator, which may be less prone to tampering.

This article originally appeared on Engadget at https://www.engadget.com/chatgpt-will-digitally-tag-images-generated-by-dall-e-3-to-help-battle-misinformation-102514822.html?src=rss




Currently serving over 70 million daily active users, Roblox is still going strong since its September 2006 launch — almost 18 years ago. The development team is now going one step further to boost the platform's massive community by providing real-time AI chat translation to connect gamers around the world. According to CTO Daniel Sturman, his team needed to build its own "unified, transformer-based translation LLM (large language model)" to seamlessly handle all 16 languages supported on Roblox, as well as to recognize Roblox-specific slang and abbreviations (this writer just learned that "obby" refers to an obstacle course in the game).

As a result, the chat window always displays the conversation in the user's own tongue — with a small latency of around 100 milliseconds, so it's pretty much real time. You can also click on the translation icon on the left of each line to see it in its original language. Sturman claims that thanks to the language model's efficient architecture and iterative training, it "outperforms commercial translation APIs on Roblox content." The development team will later roll out a feedback tool to help improve translation quality, in addition to its ongoing updates with whatever new catchphrases it picks up on the platform.

Roblox built its own large language model to support real-time chat translation for all 16 languages on its platform. It recognizes Roblox-specific slang and abbreviations.
Roblox

Roblox's translation efforts don't stop there. Sturman adds that his team is already looking into automatically translating "text on images, textures, 3D models" and more. As Roblox supports voice chat, the exec also teases the possibility of automatic voice chat translations, so gamers from around the world can seamlessly talk to one another in their own tongue on the platform. Given that Samsung already offers a similar feature via Galaxy AI, it probably won't be long before we hear another update from Roblox on this end.

This article originally appeared on Engadget at https://www.engadget.com/roblox-adds-real-time-ai-chat-translation-using-its-own-language-model-061929902.html?src=rss



One of Vision Pro's most intriguing features is undoubtedly the EyeSight display, which projects a visual feed of your own eyes to better connect with people in the real world — because eye contact matters, be it real or virtual. As iFixit discovered in its teardown, it turns out that Apple leveraged a stereoscopic 3D effect in an attempt to make your virtual eyes look more lifelike, as opposed to a conventional "flat" output on the curved OLED panel. This is achieved by stacking a widening optical layer and a lenticular lens layer over the OLED screen, which is why exposing the panel shows "some very oddly pinched eyes." The optical nature of the added layers also explains the EyeSight display's dim output. Feel free to check out the scientific details in the article.

iFixit has yet to complete its analysis before it can give the Vision Pro a repairability score, but so far we know that the front glass panel "took a lot of heat and time" to detach from the main body. That said, the overall modular design — especially the speakers and the external battery — should win some points. As always, head over to iFixit for some lovely close-up shots of the teardown process.

This article originally appeared on Engadget at https://www.engadget.com/apple-vision-pro-teardown-deconstructs-the-weird-looking-eyesight-display-083426548.html?src=rss
