Richard Lai
Following its autonomous food delivery launches in Miami and Fairfax, Virginia, Uber Eats will soon offer the same robotic service in Japan — its first outside the US. It is once again collaborating with Cartken, a startup founded by Google alumni, with local compliance help from Mitsubishi Electric, to bring a fleet of Model C sidewalk delivery robots to select areas of Tokyo in March. Uber Eats Japan CEO Shintaro Nakagawa says the autonomous delivery service will help solve the local labor shortage while complementing the existing human delivery methods "by bicycle, motorbike, light cargo, and on foot."
Cartken's six-wheeled Model C uses six cameras and AI models for autonomous driving and obstacle detection, with a remote-control mode available when needed. With guidance from Mitsubishi, the robot has been modified to suit local requirements in Japan. For one, its speed is capped at 5.4 km/h (about 3.36 mph) per local regulation, well below the 6 mph top speed it's actually capable of. The loading capacity has also been reduced from 1.5 cubic feet to about 0.95 cubic feet (27 liters), likely due to the extra thermal insulation in the compartment. Uber Eats adds that, for the sake of privacy, people's faces are automatically masked in footage captured by the robots.
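For what it's worth, the quoted figures check out; here's a quick sanity check of the conversions using the exact metric definitions (1 mile = 1.609344 km, 1 cubic foot = 28.316846592 liters):

```python
# Sanity-check the unit conversions quoted above.
# Conversion constants are exact by definition.

def kmh_to_mph(kmh: float) -> float:
    return kmh / 1.609344

def liters_to_cubic_feet(liters: float) -> float:
    return liters / 28.316846592

print(round(kmh_to_mph(5.4), 2))           # ~3.36 mph speed cap
print(round(liters_to_cubic_feet(27), 2))  # ~0.95 cubic feet of cargo
```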
While this is Uber Eats' robotic delivery debut in Japan, Cartken already has a presence there thanks to Mitsubishi. Since early 2022, the duo has worked with Starbucks, local e-commerce giant Rakuten and supermarket chain Seiyu in parts of Japan. In the US, Cartken also partners with Grubhub to provide autonomous food delivery on college campuses, including Ohio State University and the University of Arizona.
Even though Uber Eats has yet to share which Tokyo restaurants will tap into its robotic delivery service, it should have no problem finding partners given Cartken's prior local experience. That said, I highly doubt the pair would risk trialing their robots through a crowd of drunkards in Shibuya just yet.
This article originally appeared on Engadget at https://www.engadget.com/uber-eats-expands-its-autonomous-food-delivery-service-to-japan-092727592.html
Instant messaging app Signal is best known for its privacy features, but with phone numbers at the heart of the platform since its inception, there was no way to fully hide your own number — until now. Earlier today, Signal announced that you'll soon be able to create a unique username (not to be confused with your profile name), which you can share with others via a link or QR code instead of sharing your number. You'll be able to change your username as often as you want, but it must end in two or more digits, as part of Signal's anti-spoofing efforts. You can even delete your username entirely, as it is an optional feature.
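Signal hasn't published a formal grammar for usernames, but the one rule stated here — at least two digits at the end — is easy to sketch as a check. The helper below is illustrative only; Signal's real validation surely also constrains length and the allowed character set:

```python
import re

def looks_valid(username: str) -> bool:
    """Hedged sketch of the one announced rule: the username must end
    in two or more digits (part of Signal's anti-spoofing measures).
    Not Signal's actual validation logic."""
    return re.fullmatch(r"\S+?\d{2,}", username) is not None

print(looks_valid("hedgehog.99"))  # ends in two digits
print(looks_valid("hedgehog.9"))   # only one trailing digit
```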
Naturally, you'll still need a phone number to sign up for Signal, but under the new default, your number will no longer be visible to everyone (you can change this setting manually if needed). There will also be a new option that prevents people from finding you by your number; instead, they'll need your exact username. In other words, people who already have your number won't necessarily know that you're also on Signal — a good thing if you prefer to stay anonymous in the platform's public groups.
As with any new feature, spammers and scammers will eventually find ways to abuse usernames, since you won't be able to verify the numbers behind them instantly. Pro tip: when a new contact appears to be an acquaintance, always double-check with them through other means — preferably in person, or at least via a phone call. Look out for these new Signal features in a few weeks, or get an early taste in the beta release.
This article originally appeared on Engadget at https://www.engadget.com/signal-usernames-will-keep-your-phone-number-private-050008243.html
In an age where fraudsters use generative AI to scam people out of money or tarnish reputations, tech firms are coming up with methods to help users verify content — starting with still images, at least. As teased in its 2024 misinformation strategy, OpenAI is now including provenance metadata in images generated with ChatGPT on the web and the DALL-E 3 API, with their mobile counterparts receiving the same upgrade by February 12.
The metadata follows the C2PA (Coalition for Content Provenance and Authenticity) open standard, and when one such image is uploaded to the Content Credentials Verify tool, you'll be able to trace its provenance lineage. For instance, an image generated using ChatGPT will show an initial metadata manifest indicating its DALL-E 3 API origin, followed by a second metadata manifest showing that it surfaced in ChatGPT.
Despite the fancy cryptographic tech behind the C2PA standard, this verification method only works when the metadata is intact; the tool is of no use if you upload an AI-generated image sans metadata — as is the case with any screenshot or re-uploaded image on social media. Unsurprisingly, the current sample images on the official DALL-E 3 page came back blank as well. On its FAQ page, OpenAI admits that this isn't a silver bullet in the war on misinformation, but it believes the key is to encourage users to actively look for such signals.
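You can't assume C2PA data survives re-encoding, but you can cheaply check whether a JPEG still carries any APP11 segments — the JPEG marker type where JUMBF/C2PA manifests are typically embedded. A minimal sketch of segment parsing follows; real verification means validating the signed manifest with an actual C2PA library, not just spotting the segment:

```python
import struct

def find_app11_segments(jpeg_bytes: bytes) -> list:
    """Walk JPEG marker segments and collect APP11 (0xFFEB) payloads,
    the segment type where C2PA/JUMBF provenance data usually lives.
    Stops at start-of-scan, where entropy-coded image data begins."""
    segments = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; bail out of the segment walk
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed pixel data follows
            break
        # Segment length is big-endian and includes its own two bytes
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xEB:  # APP11
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments
```

A stripped screenshot or social-media re-save would simply return an empty list here, matching the article's point that the provenance chain breaks as soon as the metadata is gone.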
While OpenAI's latest effort at thwarting fake content is currently limited to still images, Google DeepMind already has SynthID for digitally watermarking both images and audio generated by AI. Meanwhile, Meta has been testing invisible watermarking via its AI image generator, which may be less prone to tampering.
This article originally appeared on Engadget at https://www.engadget.com/chatgpt-will-digitally-tag-images-generated-by-dall-e-3-to-help-battle-misinformation-102514822.html
Currently serving over 70 million daily active users, Roblox is still going strong since its September 2006 launch — almost 18 years ago. The development team is now going one step further to boost the platform's massive community by providing real-time AI chat translation to connect gamers around the world. According to CTO Daniel Sturman, his team needed to build its own "unified, transformer-based translation LLM (large language model)" to seamlessly handle all 16 languages supported on Roblox, as well as to recognize Roblox-specific slang and abbreviations (this writer just learned that "obby" refers to an obstacle course in the game).
As a result, the chat window always displays the conversation in the user's own tongue — with a latency of around 100 milliseconds, so it's practically real time. You can also click the translation icon to the left of each line to see it in its original language. Sturman claims that, thanks to the model's efficient architecture and iterative training, it "outperforms commercial translation APIs on Roblox content." The team will later roll out a feedback tool to help improve translation quality, alongside ongoing updates that fold in whatever new catchphrases surface on the platform.
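Roblox hasn't published its serving code, but the behavior described — each line rendered in the reader's language, with the original kept for the per-line toggle — suggests a thin translation layer over the model. Here's a hypothetical sketch (all names invented) that also caches repeated lines so recurring slang like "obby" is only translated once:

```python
from dataclasses import dataclass

@dataclass
class ChatLine:
    original: str
    source_lang: str

class TranslatingChat:
    """Hypothetical sketch, not Roblox's implementation: wraps any
    translate(text, src, dst) callable, keeps the original text for
    the per-line toggle, and caches (text, target) pairs so repeated
    phrases skip the model entirely."""

    def __init__(self, translate):
        self._translate = translate
        self._cache = {}

    def render(self, line: ChatLine, target_lang: str) -> str:
        if line.source_lang == target_lang:
            return line.original  # no translation needed
        key = (line.original, target_lang)
        if key not in self._cache:
            self._cache[key] = self._translate(
                line.original, line.source_lang, target_lang)
        return self._cache[key]
```

The cache is one plausible way to keep median latency near the quoted ~100 ms: common chat lines become dictionary lookups instead of model calls.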
Roblox's translation efforts don't stop there. Sturman adds that his team is already looking into automatically translating "text on images, textures, 3D models" and more. Since Roblox supports voice chat, the exec also teases the possibility of automatic voice chat translation, so gamers around the world could talk to one another in their own tongues. Given that Samsung already offers a similar feature via Galaxy AI, it probably won't be long before we hear another update from Roblox on this front.
This article originally appeared on Engadget at https://www.engadget.com/roblox-adds-real-time-ai-chat-translation-using-its-own-language-model-061929902.html
One of the Vision Pro's most intriguing features is undoubtedly the EyeSight display, which projects a visual feed of your own eyes to better connect you with people in the real world — because eye contact matters, be it real or virtual. As iFixit discovered in its teardown, Apple leveraged a stereoscopic 3D effect to make your virtual eyes look more lifelike, as opposed to a conventional "flat" output on the curved OLED panel. This is achieved by stacking a widening optical layer and a lenticular lens layer over the OLED screen, which is why exposing the bare panel reveals "some very oddly pinched eyes." The optical nature of the added layers also explains the EyeSight display's dim output. Feel free to check out the scientific details in the article.
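iFixit's write-up has the actual optics; purely as a toy illustration of the underlying lenticular idea — interleaving two views column by column so the lens layer shows each viewing angle a different image, which is also why the bare panel looks "pinched" — here is a minimal sketch (not Apple's rendering pipeline):

```python
def interleave_views(left, right):
    """Toy column interleave for a lenticular display: even pixel
    columns come from the left view, odd columns from the right.
    Each view is a list of rows; each row is a list of pixel values.
    Assumes both views share the same dimensions."""
    out = []
    for lrow, rrow in zip(left, right):
        row = [lrow[x] if x % 2 == 0 else rrow[x]
               for x in range(len(lrow))]
        out.append(row)
    return out
```

Viewed without the lens, such an interleaved frame looks squeezed and doubled — consistent with the odd appearance iFixit found on the exposed panel.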
iFixit has more analysis to do before it can give the Vision Pro a repairability score, but so far we know the front glass panel "took a lot of heat and time" to detach from the main body. That said, the overall modular design — especially the speakers and the external battery — should win some points. As always, head over to iFixit for some lovely close-up shots of the teardown process.
This article originally appeared on Engadget at https://www.engadget.com/apple-vision-pro-teardown-deconstructs-the-weird-looking-eyesight-display-083426548.html