Frustrating Apple Watch “features”
I’ve been an Apple Watch user since the beginning. Years ago I bartered for the original Series 1 from my landlord. A number of years later I bought a Series 3 off eBay. Most recently I was gifted an Ultra 2.
I am by no means an Apple Watch hater. I think the Apple Watch is very convenient and the most useful features for me include:
Siri access
Timers/alarms
Weather app/widget
Fitness tracking
But there are a couple of features that have really been frustrating me. One is old and one is brand new.
Let’s start with the older feature first.
Raise to Speak (raise to be annoyed)
When I first heard of “Raise to Speak” I thought: That’s awesome. Makes perfect sense. I’ll use this all the time. No more saying “Hey Siri,” and no more holding down the Digital Crown to manually trigger Siri. Reality has been a much different experience.
On Apple’s website the feature is explained like this: “Raise your wrist and speak into your Apple Watch.” Ha! If only it were that simple.
In preparing this post I googled the feature and found another blog claiming to have solved the inconsistency: bring the watch up to your mouth in an exaggerated motion and hold it close before speaking. Even after trying that suggestion, the consistency just isn’t there. I’ve tried everything: speeding up how quickly I raise my arm, tilting the watch more aggressively toward my mouth, holding it right up to my lips, speaking as I raise it, and waiting to speak until after I’ve raised it.
This feature is supposed to be easy to use, yet I’m a software engineer and I can’t even figure it out 🤦🏻‍♂️. I literally read the instruction manual when I get something new.
If you know the magic incantation to get this feature to work, please let me know in the comments.
So how does Apple’s newest feature perform?
The new double tap feature
What about the brand spanking new, hardware-limited feature Apple is calling double tap? Well, my new Apple Watch supports it, and I was definitely eager to test it out, especially in those obvious scenarios where one hand is tied up and you’d otherwise be tapping the screen with your nose… boy was I in for a letdown.
Now, I recognize the double tap gesture is brand new, so I cut Apple a little more slack (but Raise to Speak has been around for years and gets no excuse).
The new double tap gesture is undermined by at least two things: 1) the cryptic occasions when it doesn’t work at all, and 2) the perceptible delay when it does.
1. Scenarios where it doesn’t work
It’s difficult to know whether the feature is supported on a given screen or app. I understand that it doesn’t really make sense to have one default action for every page; some pages/apps can’t be boiled down to a single default action. But the flip side is confusing as well: how do I know where or when the feature will work? For example, if I’m lying down on my side and try to double tap to dismiss a timer or pause music, it doesn’t work at all. But if I sit up or stand, it does. As far as I can tell, this horizontal vs. vertical arm orientation is described nowhere on Apple’s website (a guess at what might be going on is sketched below). When you use the feature a small hand icon appears, but it almost seems like they need to show that icon in the places where you can use it, not only after you’ve tried and randomly succeeded (though I admit it would look really funny having that icon everywhere).
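This is pure speculation on my part, since Apple doesn’t document the gesture’s internals, but my guess is that the detection model only “arms” itself when the wrist is in a roughly raised attitude. Here’s a crude sketch of that idea using real Core Motion APIs; the pitch threshold is entirely made up for illustration:

```swift
import CoreMotion

// Hypothetical sketch: gate a gesture on wrist attitude from Core Motion.
// The 0.5 rad pitch threshold is invented; Apple's actual double tap
// model is undocumented and surely far more sophisticated.
let motionManager = CMMotionManager()

func startOrientationGating() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 0.1
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        // Only "arm" the gesture when the watch face pitches toward the
        // wearer, which would explain why lying on my side defeats it.
        let armed = attitude.pitch > 0.5
        print(armed ? "double tap armed" : "double tap ignored")
    }
}
```

If something like this is going on, lying on my side would keep the pitch below the threshold, and the gesture would silently never arm, which matches what I see.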
2. Significant delay
It’s just too slow to feel good. Like trying to catch a ball through delayed VR goggles.
Playing some music on your phone and then pausing it is an easy way to see this. It takes about 2 seconds for the action to complete… that’s pretty long. I would expect the double tap to trigger the action almost immediately after detecting it. I wonder whether the haptic feedback and the icon animation could happen at the same time as the action, or be replaced by it entirely.
Here’s my theory for what went wrong: Apple had a really great feature idea and a proof of concept that everyone loved. Then they started testing it on other watches and realized it wasn’t working as well due to CPU limitations, so they focused on the most powerful watches (Series 9 and Ultra 2). After testing it further, they realized it still had significant lag due to the computation required. So now we have a three-step process that takes over 2 seconds: double tap → haptic feedback and icon animation → action taken.
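If that theory is right, the fix might be as simple as reordering the pipeline. Here’s a toy sketch in Swift of what I mean; `showDoubleTapIcon` and `performDefaultAction` are placeholder stubs I invented, and only the haptic call is a real watchOS API:

```swift
import WatchKit

// Placeholder stubs so the sketch stands alone; these are NOT real APIs.
func showDoubleTapIcon() { /* animate the small hand icon */ }
func performDefaultAction() { /* e.g. pause playback or dismiss a timer */ }

// What the current behavior feels like: feedback first, action last.
func handleDoubleTapSerialized() {
    WKInterfaceDevice.current().play(.click)  // haptic feedback
    showDoubleTapIcon()                       // icon animation
    performDefaultAction()                    // the action lands ~2s later
}

// What I'd hope for: fire the action immediately and let the feedback
// confirm it in parallel, so the perceived latency is just detection time.
func handleDoubleTapReordered() {
    performDefaultAction()
    WKInterfaceDevice.current().play(.click)
    showDoubleTapIcon()
}
```

Again, this is just a sketch of the ordering idea, not a claim about how Apple actually structures the gesture pipeline.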
Conclusion: Reliability
Neither of these features works reliably for me.
And when I say “reliably” I mean they work maybe 1 in 4 times at best, which is really bad. I’m not sure what the gold standard is in user experience testing, but I would imagine a new feature’s failure rate needs to be extremely low to gain user adoption. Compare my experience with the AirPods Pro volume control feature, which I was originally skeptical about… it works 95%+ of the time, which honestly surprised me. I use it all the time and it’s very reliable.
For the price of these devices, and for the quality I expect from Apple, this really should be addressed.
Wearables tangent
Wearables are clearly a growing industry with lots of experimentation. I suppose I see the draw of smart glasses that extend Siri or put AI within quick reach, since I use my watch for something similar, but I don’t see myself getting a pair of smart glasses anytime soon. I’m really not a glasses person at all (and at current prices, I would hate to lose them).
But if I could ask my glasses, in natural speech, to show me a recipe for dinner or a YouTube video on how to fix the sink, without obstructing my view or requiring my hands, I’d be tempted to try a pair. Same if they could overlay a real-time translation of whatever I’m looking at. Or even better than smart glasses, let’s just go straight to smart contacts!
-Jesse