AirPods received only a passing mention during the keynote at the Apple event. That's understandable: the iPhone 15 and Apple Watch Series 9 (and Ultra 2) took center stage, and the earbuds didn't get a comparable hardware refresh. As a press release issued after the event confirmed, the biggest physical change to the AirPods Pro 2 is the (admittedly long-awaited) arrival of a USB-C charging case.
You'd be forgiven for thinking the AirPods news ended there. However, Apple's high-end earbuds also received a meaningful software update, in the form of new listening modes that can be accessed in iOS 17 with a few taps on both versions of the AirPods Pro 2 (USB-C and Lightning).
With the new models connected, swipe down to open Control Center, then long-press the volume slider. Three mode selections appear below it: noise cancellation, Conversational Awareness and spatial audio. The first two are the ones getting attention this year.
Adaptive Audio joins the existing noise-control options alongside standard noise cancellation, Transparency and Off. Tapping the new option highlights it with a rainbow background. The feature seamlessly shifts between settings in real time, an attempt to bring both ends of the spectrum together: you can walk down a busy street with situational awareness while not experiencing the full sonic impact of a passing garbage truck.
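To make the idea concrete, here is a minimal toy sketch of what blending between transparency and noise cancellation based on ambient noise might look like. All function names, thresholds and the smoothing constant are invented for illustration; Apple has not published how Adaptive Audio actually works.

```python
# Hypothetical sketch: map ambient noise to a blend between
# transparency (0.0) and full noise cancellation (1.0), then
# ease toward that target over seconds rather than milliseconds,
# matching the "purposely much slower" behavior Treski describes.
# Thresholds are illustrative, not Apple's.

def adaptive_blend(noise_db: float, quiet_db: float = 40.0,
                   loud_db: float = 80.0) -> float:
    """Return a blend factor: 0.0 = transparency, 1.0 = full ANC."""
    if noise_db <= quiet_db:
        return 0.0
    if noise_db >= loud_db:
        return 1.0
    return (noise_db - quiet_db) / (loud_db - quiet_db)

def smooth(current: float, target: float, alpha: float = 0.05) -> float:
    """Move a small step toward the target each update, so a mode
    change plays out gradually instead of snapping."""
    return current + alpha * (target - current)
```

Called repeatedly, `smooth` is what would make a transition from a quiet room to a loud street take a few seconds, while the fast per-sample noise reduction (the "40,000 times per second" path) runs separately underneath.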
![](https://techcrunch.com/wp-content/uploads/2023/09/Apple-AirPods-Pro-2nd-gen-Adaptive-Audio-230912.jpg)
Image credits: Apple
Although it shares part of its name with last year's Adaptive Transparency feature, Adaptive Audio offers a full spectrum of modes, spanning both transparency and noise cancellation.
“Adaptive Transparency, which we announced last year, needs to happen very quickly,” said Product Marketing Director Eric Treski in a conversation with TechCrunch. “That happens 40,000 times per second. That’s not just the monitoring, that’s also the reduction. To get that down quickly, it has to happen in real time. Adaptive Audio is a little slower, over the course of a few seconds, as it’s meant to be a much more methodical process of knowing what you’re listening to. When we’re moving from noise cancellation to transparency – to make it less jarring and more comfortable – it’s purposely much slower for that reason.”
The system also takes into account whether you’re listening to music or a podcast, determined by tags from apps such as Apple Music. A microphone inside the ear also measures the volume you’re actually experiencing. “Because if you only measure the loudness you think you’re playing into someone’s ears,” explains VP of Sensing and Connectivity Ron Huang, “depending on how they’re wearing it and other factors, it can be less accurate.”
Huang tells TechCrunch that the company considered using your device’s GPS to determine noise levels based on location. However, in real-world testing, the method proved unreliable.
“During the early exploration of Adaptive Audio, we actually put you in ANC versus transparency based on where you are,” says Huang. “You can imagine that the phone could give a hint to the AirPods and say, ‘Hey, you’re in the house,’ and so on. That’s one way to do it, but after all our learning, we don’t think that’s the right way to do it, and that’s not what we did. Of course, it’s not always quiet in the house and it’s not always noisy on the street. We decided that instead of relying on a location cue from the phone, the AirPods would monitor your environment in real time and intelligently make those decisions on their own.”
![AirPods Pro 2 with USB-C](https://techcrunch.com/wp-content/uploads/2023/09/Airpods-Pro-2-USB-C-3.jpg)
Image credits: Darrell Etherington
Personalized Volume is also a big part of the Adaptive Audio experience. The system combines environmental data with personal preferences to build a fuller picture of your listening habits, using “machine learning to understand environmental conditions and listening preferences over time to automatically refine the media experience,” according to Apple. A number of signals factor in.
“We used tens of thousands of hours of different data – different users listening to different content and with different background noise – to really understand the different listening preferences, and what distractors and aggressors are from a noise perspective, to keep your content really clear,” Huang adds. “We also remember your personal preferences. Given a type of environment and the amount of noise there, we learn how loudly you typically listen to your content and remember it for you. We add that to our machine learning model to make it work even better for you.”
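The "remember your preferences per environment" idea Huang describes can be sketched very simply. The class below is purely illustrative, a lookup table of running averages rather than the trained ML model Apple actually uses; all names are hypothetical.

```python
# Toy sketch of Personalized Volume's memory: track how loudly a
# user typically listens in each kind of environment and suggest
# that level the next time the same environment is detected.
# Invented names; not Apple's implementation.
from collections import defaultdict

class PersonalizedVolume:
    def __init__(self):
        # environment label -> (running average volume, sample count)
        self.prefs = defaultdict(lambda: (0.0, 0))

    def observe(self, environment: str, volume: float) -> None:
        """Record the volume the user chose in this environment."""
        avg, n = self.prefs[environment]
        self.prefs[environment] = ((avg * n + volume) / (n + 1), n + 1)

    def suggest(self, environment: str, default: float = 0.5) -> float:
        """Return the remembered level, or a default if unseen."""
        avg, n = self.prefs[environment]
        return avg if n else default
```

A real system would generalize across similar environments and weight recent behavior; the point here is only the shape of the feedback loop: observe, remember, suggest.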
The other big mode introduced via iOS 17 is Conversational Awareness, which lowers the track volume when you start speaking. External voices won’t trigger the effect, only the wearer’s. Apple achieves this without maintaining stored voice profiles; instead, it uses a combination of built-in sensors. When the microphones hear a voice and the accelerometer detects jaw movement, the feature activates. How quickly it responds depends on several factors. I was impressed by its ability to avoid being triggered by things like a cleared throat or a yawn.
The team also tackled another long-standing earbud bugbear: device switching. The five-second gap between answering a call and hearing it through your earbuds can feel like an eternity. To take advantage of the new switching speeds, though, the user must be locked into the Apple ecosystem.
![](https://techcrunch.com/wp-content/uploads/2023/09/Apple-AirPods-Pro-2nd-gen-USB-C-connection-demo-230912.jpg)
Image credits: Apple
“Our AirPods’ connection times to our devices are much faster with this new software update,” said Huang. “That’s because of all the different ways we discover nearby devices. It’s very important for us to know what the iPhone does, what the iPad does, what the Mac does. A phone call is more important than music, so when you answer a phone call we make sure we take the route away from the iPhone and, for example, connect to your Mac for the phone call.”
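Huang's "a phone call is more important than music" comment suggests a priority-based routing decision. The sketch below is a deliberately simplified guess at that shape; the activity labels and rankings are invented, not Apple's.

```python
# Hypothetical sketch of priority-based audio routing: the earbuds
# connect to whichever nearby device is doing the highest-priority
# activity. Rankings here are illustrative only.

PRIORITY = {"phone_call": 3, "video": 2, "music": 1, "idle": 0}

def pick_route(devices: dict) -> str:
    """devices maps a device name to its current activity;
    return the device the earbuds should connect to."""
    return max(devices, key=lambda d: PRIORITY[devices[d]])
```

Under this model, answering a call on one device immediately outranks music playing on another, which is the behavior the quote describes.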
The last big piece of the AirPods announcement is Vision Pro connectivity. For the full audio experience, those using Apple’s upcoming spatial computing headset will want the new AirPods Pro for ultra-low-latency lossless audio.
“Bluetooth normally operates at 2.4 gigahertz, and that airspace is very noisy,” Huang says. “Everyone runs on 2.4. That’s why Wi-Fi routers, for example, are typically dual-band, if not tri-band, because the 5GHz spectrum is so much cleaner. To get really low-latency audio, and to get really high-fidelity, lossless audio, it’s all about having a very, very clean and real-time channel between the two. The combination of 5GHz and the fact that they are very close together enabled us to do this. We were able to design a brand-new audio protocol over 5GHz for AirPods.”