Apple’s HomeKit is a user-friendly platform for anyone who owns an iOS device and is interested in a simple, reliable DIY smart-home system. With a couple of HomeKit-compatible devices and a recent iPhone or iPad, it’s realistic to start from scratch and have an integrated system up and running in a few minutes. Apple doesn’t yet offer a voice-controlled speaker like the Google Home or Amazon Echo (the HomePod should be available in early 2018), but you can control your HomeKit system via the Siri voice assistant built into Apple phones, tablets, computers, and watches, or through smartphone and tablet apps.
Apple might start to converge iOS and macOS in a big way next year by letting developers create a single app that runs across both platforms. Bloomberg reports that Apple is planning to let developers create apps that will adjust to whichever platform they’re running on, so that they’ll support touch input on an iPhone or an iPad and mouse and trackpad input on a Mac.
The report notes that plans could always change, but it sounds like the combined apps could become available next year. If so, they’d likely be announced in June at Apple’s Worldwide Developers Conference and then introduced in the fall, when new versions of iOS and macOS typically ship.
Though Apple is far from the first to attempt such a crossover, the move would still be a very big deal. While the Mac has its share of very good apps, iOS is by far the more vibrant ecosystem. By letting developers create for two platforms at once, Apple can potentially make it easier and faster for updates and new apps to arrive on the Mac.
The really big open question is how all of this will work. It isn’t clear whether this means that macOS will emulate portions of iOS, or if developers will still largely have to code two separate apps. The closer the two platforms are, the easier it’ll be for developers — but the bigger the risk of users on one platform getting an interface that feels like it was designed for the other platform’s input methods.
Microsoft has been trying this exact same thing with Universal Windows Platform apps for a while now, allowing developers to create apps that run across Windows, Windows phones, the Xbox, HoloLens, and even the Surface Hub. This has had limited success, in part because there’s always going to be one or more platforms that developers just don’t care to serve. Google has started going down the same road, bringing Android apps over to Chrome OS.
(Via The Verge – All Posts)
We spent more than two months testing 15 indoor Wi-Fi home security cameras, evaluating motion and sound sensitivity, alert types and frequency, speaker and microphone sound quality, smartphone apps, storage options, placement flexibility, and image quality, and the Logitech Logi Circle is the best choice for most people. The Logitech Logi Circle camera was the easiest to set up, the most flexible to place, and the most intuitive to use of all the cameras we tested.
Two more iMac Pro impressions have been posted, with benchmarks from both showing massive gains in processing power in the iMac Pro over older models — plus the inclusion of AVX-512 vector processing optimization in the W-series Xeon processor giving an added push to properly optimized apps.
Airbnb is developing virtual and augmented reality features to help guests find and navigate rental listings, the company announced on its blog today. Three-dimensional scans and 360-degree photos would allow users to get a better sense of a listing before booking, and augmented reality overlays could help guests better understand a home once they’re inside it. The company has been looking into VR to build trust between guests and hosts since last year, and this announcement confirming experiments and prototypes could mean a feature is coming soon.
It makes sense to take a look around a listing in VR before booking, but Airbnb also lists a few instances where augmented reality might be helpful during your trip. If you’re staying overseas and everything’s in a foreign language, it could be difficult to unlock the door, or figure out the thermostat or the hot water in the shower. It also lists a fun example of AR usefulness like “pulling up a mobile device to get directions to the coffee mugs … first thing in the morning.” This is a problem that could easily be solved by just opening up a few cupboards and figuring it out, but I get what they’re saying.
Virtual tours have been around in the real estate industry for a while, and Airbnb hosts have been asking for a feature to integrate them into their own listings. While the VR technologies could be implemented by providing Airbnb hosts with 360-degree cameras, it might be harder to integrate the AR concept for individual listings. Since ARKit was released with iOS 11, we’ve seen some concepts of what an AR overlay for Airbnb listings might look like. It seems like a useful and worthwhile idea, but it might be a more difficult challenge for Airbnb to take on.
But until this is a reality, I’ll still be satisfied with charming hand-written binders with tips on the local sights and illustrations on how to figure out the thermostat.
Shazam! Apple is buying the British software company behind the popular music tagging and recognition app. TechCrunch first reported the deal as likely happening last Friday, and today Apple officially confirmed the acquisition to BuzzFeed and the Financial Times.
After more than 40 hours of research and testing, including time spent making eight photo books and consulting with a master printer on the results, we recommend Shutterfly as the best online photo book service for people who don’t use Macs. For Mac users, we recommend the cheaper, better Apple Photo Books.
Backpacks are great at carrying stuff around, but even the nicest bags are still just canvas and leather sacks that we wear on our backs.
But there’s no product that crowdfunding won’t try to make “smart,” which brings us to Visvo. It’s a new company that’s looking to get funding on Kickstarter for a line of smart backpacks, and it has some interesting ideas of what a technology-focused bag should look like.
For example, there’s the usual integrated USB battery pack, which lets you recharge your gear on the go, but it also powers external lights for biking, internal LEDs for lighting up the inside of the bag, and even an optional GPS tracker in case your bag gets stolen. One of the models even includes a wireless Qi charging pad integrated into one of the pockets, which is either the most ridiculous or most brilliant thing I’ve ever heard.
The best addition might be the shoulder straps: each one has powerful magnets to hold your headphones — or maybe even catch your AirPods if they fall out of your ears.
There are also more USB ports hidden around the backpack in addition to the three built into the battery pack (one on the inside of the bag, and one built into a zip pocket on the right shoulder strap for charging your phone or wireless headphones). All the internal wiring can be accessed through a handy back zipper, so you don’t have to dig through all your stuff to recharge the internal battery, too.
The rest of the Novel line is pretty standard for a backpack: the outside is a treated canvas that claims to be water and stain resistant, there’s a laptop pocket, and some shock-absorbing rubber on the bottom to protect your gear.
All those extra smarts don’t come cheap, though. The base model Novel 1.0 starts at €249 for early-bird pricing (around $293), with the larger Novel 2.0 at €269 (roughly $316), and the most feature-laden Novel 3.0 at €299 (around $352). The bags aren’t expected to ship until July 2018, either, which is something to take into account, especially seeing as this is a crowdfunded project with all the usual risks that can entail.
Lots of kids will be gifted connected toys this holiday season, and while I’m all for spoiling children, I also suggest thinking about the risks that come with an internet-connected plaything. Many of these devices connect to the web or rely on companion apps, and most of them collect data about your child. Beyond security vulnerabilities, the way these companies treat data is worth considering.
Take ToyTalk, the company behind Hello Barbie, whose privacy policy states: “We may also use, store, process, convert, transcribe, analyze or review voice recordings (along with text and transcriptions derived from the voice recordings) in order to provide, maintain, analyze and improve the functioning of the speech processing services, to develop, test or improve speech recognition technology and artificial intelligence algorithms, to develop acoustic and language models, and for other research and development and data analysis purposes.”
Basically, the data your child gives to Hello Barbie, whether it’s intentional or not, lives on ToyTalk’s servers and the company can access it whenever it wants. It shares information with other third parties, too, including vendors that help maintain the technology.
Now, it might not bother you that your kid’s voice data improves ToyTalk’s software. You’re technically improving their own toy, too. But at the same time, once that data is out there, it’s difficult to recall. Data often spreads far and wide, and it’s transferred from company to company during acquisitions. Who knows where those files could ultimately end up.
Apple’s secretive autonomous car project has shifted focus over the years, but this year, it seems to be picking up speed. In April, the company received a permit to test self-driving cars in California, and in June, Apple CEO Tim Cook confirmed that the company was working on software that could allow cars — and maybe other things — to drive themselves. During a talk on Friday, Apple’s director of artificial intelligence research, Ruslan Salakhutdinov, spoke about some of the company’s recent advances in machine learning that would be useful for such a project.
Wired reports that Salakhutdinov spoke before a group of AI experts at the end of this year’s Neural Information Processing Systems (NIPS) conference in Long Beach, California. There, he spoke about how Apple is using machine learning to analyze data from a vehicle’s cameras. He talked about techniques used in a recently published study on the advances that the company has made in using AI to detect pedestrians and cyclists with LiDAR. But he also revealed efforts on some other projects: software that uses a car’s cameras to identify objects such as cars and pedestrians, as well as the drivable lanes on the road. He also showed off images that demonstrated how the system performed even when camera lenses were obscured by raindrops, and how the software could infer where pedestrians were, even when they were hidden behind parked cars.
Salakhutdinov also discussed how the software interprets the data it’s fed. One project uses a technique called SLAM (simultaneous localization and mapping) to give the software a sense of where it is — something that’s also used in map building and augmented reality — while another takes data from the cars and uses it to help build more detailed maps. According to Wired, he didn’t speak specifically about how these projects fit into Apple’s car effort, but it seems as though Apple’s focus will be on developing the brains that will eventually steer the cars safely.
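For a rough feel of the SLAM idea, here’s a hand-rolled, one-dimensional toy sketch — not Apple’s implementation, whose details haven’t been published, and far simpler than the filter- or graph-based SLAM used in real vehicles. A car dead-reckons from odometry, and re-observing a single landmark lets it nudge both its pose estimate and its map toward agreement:

```python
def run_slam(steps, gain=0.5):
    """Toy 1-D SLAM. steps: list of (odometry_delta, measured_range_to_landmark)."""
    pose = 0.0       # estimated position along the road
    landmark = None  # estimated landmark position (unknown at the start)
    for odo, rng in steps:
        pose += odo                # dead-reckon with (possibly biased) odometry
        if landmark is None:
            landmark = pose + rng  # first sighting initializes the map
        else:
            # innovation: measured range vs. range predicted from the map
            innovation = rng - (landmark - pose)
            # split the correction between the pose and the landmark estimate
            pose -= gain * innovation / 2
            landmark += gain * innovation / 2
    return pose, landmark

# Demo: car starts at 0 with a landmark truly 10 m ahead; odometry
# overshoots by 10% (reports 5.5 m for a true 5 m move).
pose, landmark = run_slam([(0.0, 10.0),   # stand still, range the landmark
                           (5.5, 5.0)])   # move, then range it again
```

In this run, raw odometry alone would put the car at 5.5 m when it’s truly at 5.0 m; after the landmark re-observation, the corrected pose estimate of 5.375 m is closer to the truth. Real systems track many landmarks in 3-D and weight corrections by uncertainty, but the core loop — predict from motion, correct from re-observation — is the same.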