Starting May 1, 2024, Apple requires developers to declare reasons if their apps use APIs that can potentially be misused to collect unique device signals. These signals enable abusers to derive a device identifier, or fingerprint, and track user activity across apps from different developers. Such APIs are referred to as required reason APIs. To prevent misuse of these APIs, Apple will reject apps that don’t describe their use of the APIs in their privacy manifest file. However, we found that apps such as Google Chrome, Instagram, Spotify, and Threads don’t adhere to their declared reasons.
Background
At WWDC 2023, Apple announced a new privacy measure to prevent device fingerprinting, a practice prohibited by the Apple Developer Program License Agreement. Apple published a preliminary list of APIs that could be misused to collect unique device signals. Before using any of these APIs, developers need to declare the reason for using them in a privacy manifest file.
The list will change over time as more APIs are added. Apple has provided a list of approved reasons for each API. Developers have to review their code and pick the approved reason that best describes their use of the API. Developers are also responsible for third-party SDKs included in their apps and are therefore required to describe those SDKs’ use of required reason APIs as well. Apple also published a list of SDKs that require a privacy manifest and signature.
Maintaining the privacy manifest to pass Apple’s App Review process is yet another burden that developers have to bear. Is it worth the overhead?
In Practice
Describing the use of required reason APIs is a great attempt to prevent fingerprinting. The process also educates developers about such APIs: why access to them should be minimized, and why signals retrieved from them should remain on-device and never be sent off-device. But that is the theory.
In practice, we analyzed the network traffic of several popular apps that were updated after May 1, when this new requirement took effect. We focused on the API that retrieves a device’s boot time, or system uptime: the elapsed time in seconds since the device was last restarted. Combined with a few other signals, the system uptime can produce a very accurate fingerprint of a device.
The use of system boot time APIs requires declaring an approved reason. The possible approved reasons for using the system uptime API are as follows:
35F9.1
Declare this reason to access the system boot time in order to measure the amount of time that has elapsed between events that occurred within the app or to perform calculations to enable timers.
Information accessed for this reason, or any derived information, may not be sent off-device. There is an exception for information about the amount of time that has elapsed between events that occurred within the app, which may be sent off-device.
8FFB.1
Declare this reason to access the system boot time to calculate absolute timestamps for events that occurred within your app, such as events related to the UIKit or AVFAudio frameworks.
Absolute timestamps for events that occurred within your app may be sent off-device. System boot time accessed for this reason, or any other information derived from system boot time, may not be sent off-device.
3D61.1
Declare this reason to include system boot time information in an optional bug report that the person using the device chooses to submit. The system boot time information must be prominently displayed to the person as part of the report.
Information accessed for this reason, or any derived information, may be sent off-device only after the user affirmatively chooses to submit the specific bug report including system boot time information, and only for the purpose of investigating or responding to the bug report.
All of the approved reasons emphasize that information retrieved by the API, or derived from it, may not be sent off-device, apart from the narrow exceptions quoted above.
The following sections show how apps adhere to their declared reasons.
Facebook
Facebook declares the approved reason 35F9.1 for accessing the system boot time, as shown in its privacy manifest file (extracted from the app binaries):
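The manifest screenshot is not reproduced here, but a declaration of this shape can be generated and inspected with a few lines of Python. The key names are Apple's documented privacy manifest keys; the surrounding script is just our illustration:

```python
import plistlib

# A PrivacyInfo.xcprivacy entry declaring use of the system boot time
# API with approved reason 35F9.1.
manifest = {
    "NSPrivacyAccessedAPITypes": [
        {
            "NSPrivacyAccessedAPIType": "NSPrivacyAccessedAPICategorySystemBootTime",
            "NSPrivacyAccessedAPITypeReasons": ["35F9.1"],
        }
    ]
}

# Serialize to the XML plist format used by .xcprivacy files.
xml = plistlib.dumps(manifest).decode()
print(xml)
```

The same structure appears in each of the manifests extracted below; only the declared reason codes differ.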
Our testing shows that Facebook still sends the system uptime off-device. This is a screenshot of the request:
Google Chrome
Google Chrome declares the approved reason 35F9.1 for accessing the system boot time, as shown in its privacy manifest file (extracted from the app binaries):
As shown above, Reason 35F9.1 instructs developers not to send the accessed information off-device. However, our testing shows that Google Chrome still sends the system uptime off-device. This is a screenshot of the request:
Instagram
Instagram also declares the approved reason 35F9.1 for accessing the system boot time, as shown in its privacy manifest file (extracted from the app binaries):
Our testing shows that Instagram still sends the system uptime off-device. This is a screenshot of the request:
Spotify
Spotify declares the approved reasons 35F9.1 and 8FFB.1 for accessing the system boot time, as shown in its privacy manifest file (extracted from the app binaries):
As shown above, both declared reasons instruct developers not to send the accessed information off-device. However, our testing shows that Spotify still sends the system uptime off-device. This is a screenshot of the request:
Threads
Threads, an Instagram app, declares the approved reason 35F9.1 for accessing the system boot time, as shown in its privacy manifest file (extracted from the app binaries):
Our testing shows that Threads still sends the system uptime off-device. This is a screenshot of the request:
Final Words
While forcing developers to describe their use of required reason APIs is a great starting point toward stopping fingerprinting, it gives a false sense of privacy. Apple doesn’t provide a mechanism to enforce what developers declare. We have seen this approach before, when Apple introduced Privacy Nutrition Labels: there is no mechanism to verify what developers show on their apps’ Privacy Nutrition Labels.
All in all, you should always exercise judgment and only install apps from developers you trust.
Apple introduced a new URI scheme in iOS 17.4 to allow EU users to download and install alternative marketplace apps from websites. Once an authorized browser invokes the special marketplace-kit URI scheme, it hands off the installation request to a MarketplaceKit process, which communicates with the marketplace back-end servers to ultimately install the app. As part of the installation flow, the MarketplaceKit process sends a unique client_id identifier to the marketplace back-end. Both Safari and the MarketplaceKit process allow any website to call the marketplace-kit URI scheme of a particular marketplace. As a result, multiple websites can trigger the MarketplaceKit process to send the same unique client_id identifier to the same marketplace back-end. This way, a malicious marketplace can track users across different websites.
Video
Background
To comply with the European Digital Market Act (DMA), Apple had to introduce a new method that allows EU users to download and install alternative marketplace apps from the developers’ websites. The marketplace developer needs to add a call to a special URI scheme to their website. The call must be triggered by an HTML button, i.e. a click event. According to Apple, this is a security measure to prevent triggering the installation process without the user’s consent.
Apple must have forgotten that this is the web, where developers can style HTML buttons to look like virtually anything. It’s not clear what value this security measure brings. Anyhow, the new URI scheme looks like this:
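The original call isn't reproduced here; based on the parameters discussed below (alternativeDistributionPackage, token, and account), the invocation takes roughly this shape. Treat the path and parameter order as assumptions:

```
marketplace-kit://install?alternativeDistributionPackage=https://marketplace.example/package.json&token=<developer-supplied value>&account=<optional user ID>
```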
token
An optional authentication token to include if downloads require authorization. iOS sends the token back to your token endpoint to reference this request. The value is free-form, and can contain any information at your discretion.
account
An optional user ID for the page visitor. iOS groups apps in restore requests based on account. iOS also provides the account as login_hint for the authorization call during interactive re-authentication; for more information, see Reauthenticating a person to manage apps.
When an authorized browser invokes this scheme, it hands off the installation request to MarketplaceKit. MarketplaceKit then starts an internal process that receives all the URL parameters and kick-starts the installation. It begins by retrieving the following .well-known resource from the marketplace website:
MarketplaceKit constructs this URL by replacing the base URL with the base URL passed in the alternativeDistributionPackage parameter. Once downloaded, MarketplaceKit extracts the token_endpoint URL from the JSON structure and sends the following request to it:
As Apple’s documentation explains, client_id is “a value that iOS randomly generates once per marketplace, device, and account combination.” This means that client_id remains unique as long as the combination of device, Apple ID, and marketplace stays the same. It remains the same every time the marketplace-kit scheme is invoked, even after a device restart or clearing the browser cache. In addition, MarketplaceKit relays whatever token is passed to the scheme in the token parameter to the subject_token parameter of the POST request to the /oauth/token endpoint.
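To make the relay concrete, here is a sketch (ours) of the form body for the /oauth/token call, limited to the two fields discussed in this post; the real request carries additional OAuth parameters:

```python
import urllib.parse

def build_token_request_body(client_id: str, subject_token: str) -> str:
    """Form-encode the two fields discussed above.

    client_id is fixed per (device, Apple ID, marketplace) combination;
    subject_token is relayed verbatim from the scheme's token parameter.
    """
    return urllib.parse.urlencode({
        "client_id": client_id,
        "subject_token": subject_token,
    })

# MarketplaceKit relays the token without validating it, so even
# garbage reaches the marketplace back-end:
body = build_token_request_body("example-client-id", "not a valid JWT")
print(body)
```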
It is worth noting that only browsers authorized by Apple can invoke the marketplace-kit URI scheme. Browsers willing to support the new scheme have to apply for a special entitlement. At the moment, only Brave, Ecosia, and Safari support the marketplace-kit URI scheme.
Implementation Flaws
Our testing shows that Apple delivered this feature with catastrophic security and privacy flaws. First, Safari invokes the marketplace-kit URI scheme without checking that the origin of the website containing the URI scheme matches the URL passed in the alternativeDistributionPackage input parameter. This allows cross-site tracking, as we’ll show in the next section.
Second, MarketplaceKit accepts any parameters once invoked. It doesn’t read or validate the JWT tokens passed in the arguments. We are sure that MarketplaceKit doesn’t read the tokens because we sent text that doesn’t conform to a valid JWT structure, and MarketplaceKit accepted it. Worse, it blindly relayed the invalid JWT token when calling the /oauth/token endpoint. This opens the door to various injection attacks targeting either the MarketplaceKit process or the marketplace back-end.
Third, certificate pinning is not deployed anywhere in the process. This makes it easy to intercept and manipulate requests between the MarketplaceKit process and the marketplace back-end. Supporting certificate pinning here might be tricky because MarketplaceKit may communicate with many servers that the marketplace developer can change dynamically in the .well-known resources. But that flexibility has its own issues: in our testing, we overwrote the .well-known resources by intercepting the calls and feeding MarketplaceKit our own endpoints. As a result, MarketplaceKit called our endpoints.
Flaws in software are not uncommon. However, the severity of these flaws in both the design and the implementation raises concerns about Apple’s entire approach to app sideloading.
Secretly Tracking Users
Our observations show that MarketplaceKit always reacts to the input parameters passed in the scheme and sends the client_id identifier to any website; it doesn’t check whether the information matches a registered marketplace. We noticed, however, that when the information doesn’t match a registered marketplace, client_id changes every time the URI scheme is invoked. But as long as the base URL of the alternativeDistributionPackage and account input parameters match a registered app marketplace, MarketplaceKit always sends a fixed client_id to the /oauth/token endpoint of that marketplace.
This makes the perfect recipe for a malicious marketplace to track users across different websites. All the malicious marketplace has to do is get approved by Apple. History shows that Apple’s review process is very flawed, as many scam apps continue to find their way into Apple’s App Store.
The release of the first alternative marketplace run by altstore.io has made the process clearer for us and provided a good example for experimentation.
We built a couple of websites to prove our theory. Since AltStore has already been approved by Apple, we “borrowed” its alternativeDistributionPackage URL and account name. We added the following code to an HTML button and deployed it on three different websites: mysk.ca, mysk.app, and mysk.io:
Now, when a user visits these three websites, each website will trigger MarketplaceKit to call the marketplace endpoint and hand it the unique client_id and any custom payload passed in the token parameter. The unique client_id enables the marketplace developer to trace all three visits to the same user. The marketplace can also share this information with the websites to personalize ads, for example.
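On the server side, linking the visits is trivial. This sketch (ours, not any marketplace's actual code) shows how a back-end could key a profile on client_id:

```python
from collections import defaultdict

# Visits keyed by client_id; one device always maps to one profile.
profiles: dict[str, list[str]] = defaultdict(list)

def record_token_call(client_id: str, payload: str) -> None:
    """Each website the user visits triggers one /oauth/token call.

    The payload (e.g. a site name smuggled in the token parameter)
    reveals which site triggered the call.
    """
    profiles[client_id].append(payload)

# The same fixed client_id arrives from all three websites:
for site in ["mysk.ca", "mysk.app", "mysk.io"]:
    record_token_call("fixed-client-id", site)

print(profiles["fixed-client-id"])
```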
For the script above, MarketplaceKit sent the following request to the /oauth/token endpoint of AltStore:
We used Safari on iOS 17.4.1 in private browsing mode during the test.
The sample script shown above stops right after exchanging the unique identifier; it doesn’t run the entire flow to eventually install the app. Apple’s documentation states that the installation can only be started when invoked from the developer’s registered website, but that check happens at a much later stage of the process.
What makes this attack perfect for trackers is that MarketplaceKit runs as soon as the user taps a button, and it could really be any button. It sends the unique client_id silently, without the user being aware of it. And when it fails for some networking reason, it fails silently, without presenting any error to the user.
This attack only works on EU iPhones. Other iPhones don’t support the marketplace-kit URI scheme.
Final Words
The flaw that exposes EU users to tracking is the result of Apple insisting on inserting itself between marketplaces and their users. This is why Apple needs to pass an identifier to the marketplaces: so they can identify installs and perhaps better calculate the Core Technology Fee (CTF) due.
Safari should protect users against cross-site tracking. It should do what Brave has done: check the origin of the website and match it against the URL passed in the alternativeDistributionPackage parameter, and refuse to invoke the URI scheme if the URLs don’t match. Surprisingly, Apple finds it more important to check that the scheme call came from an HTML button event than to check for cross-site invocation of the marketplace-kit URI scheme. Very puzzling.
Moreover, we always advise developers dealing with JWT tokens to verify them before using them. Sadly, we can’t even give this advice to Apple, because MarketplaceKit doesn’t even try to parse the JWT tokens. So, please read the JWT tokens, make sure they are parsable, and validate them before acting on the request.
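At minimum, a structural check would have rejected the garbage we sent. A sketch of such a check (shape only; real validation must also verify the signature against the issuer's key):

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # JWT segments use unpadded base64url; restore the padding first.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def parse_jwt_unverified(token: str) -> dict:
    """Reject anything that isn't structurally a JWT before relaying it."""
    parts = token.split(".")
    if len(parts) != 3:
        raise ValueError("a JWT has exactly three dot-separated segments")
    header = json.loads(b64url_decode(parts[0]))
    payload = json.loads(b64url_decode(parts[1]))
    return {"header": header, "payload": payload}

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# A structurally valid (but unsigned) token parses fine...
token = ".".join([
    b64url(json.dumps({"alg": "ES256", "typ": "JWT"}).encode()),
    b64url(json.dumps({"iss": "marketplace"}).encode()),
    b64url(b"fake-signature"),
])
print(parse_jwt_unverified(token)["payload"]["iss"])  # marketplace
```

Anything without three dot-separated base64url segments, like the text we injected, raises an error instead of being relayed downstream.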
Finally, EU users who want to avoid being tracked should use Brave. It’s currently the only authorized browser that blocks this type of cross-site tracking.
With Tesla’s current design, if an attacker has the email and password of a victim’s Tesla account, they can drive away with the victim’s Tesla, even if two-factor authentication is enabled. The Tesla Product Security team has investigated this issue and determined that this is the intended behavior.
Video
Introduction
Phishing and social engineering attacks are not uncommon. However, an attacker who gets a hold of leaked or stolen credentials shouldn’t have it all. This post shows that Tesla doesn’t protect its users, or vehicles, against stolen credentials. Unfortunately, an attacker who somehow obtains the credentials of a victim’s Tesla account can take control of the car and drive away with it.
The major problem with the design is that activating a phone key only requires the email and password of the Tesla account, plus physical proximity to the vehicle. With an activated phone key, a user, or an attacker, has full control of the vehicle. The flow doesn’t require the user to be inside the car or to use another physical factor of authentication, such as a Tesla key card or scanning a QR code displayed on the Tesla’s touchscreen.
Attack Concept
We developed a hypothetical, but not far-fetched, scenario based on phishing and social engineering to obtain the credentials of a Tesla account. We used a captive Wi-Fi network whose SSID looks familiar to Tesla owners. We named the Wi-Fi network “Tesla Guest” to match the guest network Tesla offers at its service centers. The network displays a captive portal right after a device joins it. On the captive portal, we showed a fake sign-in screen that matches the real sign-in screen on Tesla’s website.
Now, all the attacker needs to do is broadcast this Wi-Fi network in a place that Tesla drivers frequently visit: a Tesla Supercharger. What better place to find Tesla drivers than a Tesla Supercharger?
Attack Preparation
Theoretically, a Tesla driver at a Tesla Supercharger would discover this Wi-Fi network and attempt to join it, as it looks as if it were offered by Tesla. Upon joining, the captive portal is shown on the driver’s smartphone, prompting the driver to sign in. Since this captive portal is fully controlled by the attacker, everything typed into it is logged and shown to the attacker in real time. The fake sign-in screen mimics the real Tesla sign-in screen and prompts for the Tesla account email, password, and two-factor one-time passcode (OTP). As a result, the victim would enter the account details as if they were signing in on an official Tesla website or app.
The attacker hopes that the victim will enter the correct details in order to join what seems to be a free Tesla Wi-Fi network offered only to Tesla drivers at a Tesla Supercharger. Everything sounds legit.
For the purpose of recording this demo, we used a Flipper Zero to run the captive Wi-Fi network. We cannot stress enough that many devices are capable of doing the same. In fact, a laptop would best fit this scenario, as the attacker needs to react within 30 seconds in case the victim’s account is protected by two-factor authentication.
As the attacker receives the Tesla account information in real-time, the attacker enters these details in the official Tesla mobile app. If the victim’s account is not protected by two-factor authentication, the attacker will successfully sign in. If the account is protected by two-factor authentication, the fake captive portal will prompt the victim to enter the one-time passcode, which is relayed in real-time to the attacker. Then, the attacker enters the OTP in the official app and successfully signs in.
At this point, the victim does not receive any push or email notification that a new sign-in to their Tesla account has occurred. The session in the Tesla app immediately starts showing live information about the vehicle, such as its precise location and state.
The Attacker is in Control
The attacker now possesses a valid session in the mobile app that allows for tracking the vehicle. However, the attacker cannot control the vehicle yet. For that, the phone key has to be activated. It turns out this is the simplest step in the entire process. At any time, the attacker can simply turn on Bluetooth and location services on their smartphone, tap the “Set up” button in the app, walk by the vehicle, and the phone key will be activated. From this moment on, the attacker is in full control.
Since the attacker receives live tracking information about the vehicle, driving away with the car right at the Supercharger would be too blatant; the owner is still nearby. Ideally, the attacker would track the vehicle and wait for the right moment, when the vehicle is left unattended.
To make things worse, neither the Tesla website nor the app shows how many active sessions a user has. Moreover, if the owner navigates to the list of keys shown on the Tesla’s touchscreen and spots a strange phone key, removing that unwanted phone key doesn’t terminate the session of the app associated with the key. Thus, the attacker can still receive live information about the vehicle and can re-activate the key at any moment.
Surprisingly, removing a phone key from the Tesla’s touchscreen requires authentication with a key card. And after successfully removing a phone key, the owner receives a push notification.
Tesla Product Security Responds
We are aware that phishing and social engineering attacks are not covered by Tesla’s bug bounty program.
However, the Tesla owner’s manual clearly states that a phone key cannot be added without authenticating with a key card — the RFID cards or standard keys that come with a Tesla. As shown in the scenario, the attacker was able to add a phone key without having a key card. For us, this is a clear bug.
We reported it to Tesla Product Security and highlighted the link that states the relevant information.
The key card is used to “authenticate” phone keys to work with Model 3 and to add or remove other keys.
Surprisingly, the Tesla Product Security team determined that this is the intended behavior. In addition, they rejected the claim that the key card is needed to authenticate a phone key, which is the opposite of what the owner’s manual says.
Hi Tommy,
Thanks for the report. We have investigated and determined that this is the intended behavior. The “Phone Key” section of the owner’s manual page you linked to makes no mention of a key card being required to add a phone key.
Thanks,
Tesla Product Security
Final Thoughts
We believe that this behavior is unsafe and Tesla vehicles clearly are vulnerable to phishing and social engineering attacks. We recommend the following remedies for this problem:
Tesla should make key card authentication mandatory for adding new phone keys
Tesla should notify owners when new keys are added
It’s worth mentioning that the “PIN to Drive” security feature that Tesla vehicles have wouldn’t prevent the attacker from driving the car. The phone key in the Tesla mobile app can bypass the “PIN to Drive.” Thus, the attacker can disable the PIN and drive away with the car.
Phishing and social engineering attacks are very common today, especially with the rise of AI technologies, and responsible companies must factor such risks into their threat models. If victims are tricked into exposing their credentials, they shouldn’t lose it all. They certainly shouldn’t lose their car.
Hopefully, this post and video will raise awareness about this topic. If you have found this post helpful, share it with friends to spread the word, especially if they are Tesla owners.
UPDATE (September 2, 2022): Added new remarks about Android 13 and comparison between Brave, Chrome, DuckDuckGo, Edge, and Firefox (Android)
UPDATE (September 1, 2022): Facebook fixed the iOS app. Now it stops monitoring the accelerometer. For the feature of “shake the phone to report a problem,” it is subscribing to an iOS shake event.
Nearly every modern smartphone is equipped with an accelerometer, which, as the name implies, is a sensor that measures acceleration. It’s most commonly used for detecting the device’s orientation. It also has many other uses, whether as a game controller in racing games, as a pedometer for counting daily steps, or to detect falls as seen in the Apple Watch. There has also been research into novel accelerometer applications: estimating heart rate or breathing rate, or even using the accelerometer as a rudimentary audio recorder. Currently, iOS allows any installed app to access accelerometer data without explicit permission from the user. Curious apps might be able to learn a lot about users through the accelerometer, without their knowledge or permission.
Videos
The Accelerometer in iOS
The iPhone is equipped with accurate accelerometer and gyroscope hardware. These sensors can measure the attitude, rotation rate, and acceleration of your iPhone with high accuracy.
Steve Jobs demonstrated the capabilities of these two sensors during the introduction of iPhone 4.
The accelerometer and gyroscope are bundled together in iOS and are part of the Core Motion Framework. For the sake of brevity, I will just say accelerometer to refer to both sensors.
The accelerometer has tons of applications, and many apps rely on it. Most users won’t realize when their favorite apps use the accelerometer. This is simply because apps don’t need permission to read accelerometer data. Unlike access to location services and Bluetooth, access to the accelerometer is granted to all apps on the iPhone. So apps can read measurements from the accelerometer without any restriction, except for one: apps can only read the accelerometer while they are active in the foreground. iOS prevents apps running in the background from reading the measurements.
Apps that access resources protected by a system permission have to specify why they need such access. Developers have to formulate the reason in a simple description that conveys the message to users. iOS shows the description in the permission dialogue when the app requests the permission from the user. Apps that don’t provide this information for each permission they need will not be approved by Apple’s App Review team. Since access to the accelerometer is not protected by a system permission, developers are not required to inform users why they need the access.
At first glance, accelerometer data seems to be innocuous. It’s only about moving and rotating the phone, right? Can that breach your privacy? The answer lies in the next section.
Possible Scenarios
Accelerometer measurements are collected all the time while you are holding your phone. iOS makes the measurements accessible to the app that is active in the foreground. The app may choose to ignore the measurements or read them. There are no boundaries for what an app can do with the measurements, but here are some spooky scenarios:
Motion and Activities
Accelerometer data reflects how you hold your phone and how you move. An app can tell if you are using it while lying, sitting, walking, or cycling. The app can also count your steps. Although access to the pedometer on the iPhone is protected by a system permission, there are many sophisticated algorithms that process accelerometer data to achieve exactly that.
It is worth mentioning that the iPhone is also equipped with a barometer, a sensor that measures air pressure and altitude. The barometer is also part of the Core Motion Framework and no permission is required to access it. As a result, any app can figure out your altitude and measure air pressure in your environment. Thus, any app can tell if you are riding on a bus, train, or plane while using it.
Heart Rate
The accelerometer can detect the slight movements of your hand and body while holding the phone. Researchers can use this data to estimate your heart rate. Thus, an app can potentially know your heart rate while you are using it.
Location
Accelerometer data doesn’t contain any location information. However, it can be used to infer your exact location based on the vibration pattern in your environment.
To illustrate this concept, consider the following example:
You are commuting to work by bus. While sitting on the bus, you open your favorite social app. Even though it is your favorite app, you don’t trust it enough to share your location with it. At the next stop, a passenger gets on the bus, sits down, and opens the same social app; unlike you, the passenger shares their precise location with the app. Now, if this social app is reading accelerometer data on your phone as well as the passenger’s phone, the app can easily figure out that both phones experience the same vibration pattern. Indeed, both phones will record the same vibrations, e.g. when the bus takes off, stops, or swerves left or right. The app now knows that you and the passenger are in the same environment, and hence at the same location. Don’t be surprised if you receive a recommendation from the app to add this passenger as a friend.
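The matching itself needs nothing fancy. As a hedged sketch (real systems would align timestamps and filter noise first), correlating two magnitude traces with plain Pearson correlation is enough to flag phones riding the same bus:

```python
def pearson(a: list[float], b: list[float]) -> float:
    """Pearson correlation between two equal-length vibration traces."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    dev_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    dev_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (dev_a * dev_b)

# The bus's bumps and swerves show up in both phones' accelerometers,
# shifted only by each sensor's constant offset.
bus_bumps = [0.0, 0.8, 0.1, -0.5, 0.9, 0.0, -0.7, 0.3]
your_phone = [x + 0.02 for x in bus_bumps]
passenger_phone = [x - 0.01 for x in bus_bumps]

print(pearson(your_phone, passenger_phone) > 0.95)  # True
```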
Audio Recorder
Sound waves generated by your phone speakers cause the phone to vibrate. As every sound makes unique vibrations, researchers were able to analyze the vibrations and work their way backwards to reconstruct the original sound.
So, if you are on a call using the phone’s speaker, an app can pick up the vibrations generated by the speaker and recorded by the accelerometer. This way, the app can record the call without having access to the microphone, albeit only your counterpart’s voice will be recorded.
Any Examples?
I tested several apps and checked if they read accelerometer data without a clear reason. Here are some examples:
Facebook
Facebook reads the accelerometer all the time. The app shows a support prompt when a shake event is detected anywhere in the app, which could be one reason why it reads accelerometer data. The prompt has an option to switch this feature off. However, switching it off doesn’t stop the app from reading the accelerometer.
The Facebook app for iOS has stopped monitoring the accelerometer, according to our testing of version 382.1; the fix might have been applied in an earlier version. The app now requests shake events from iOS to present the support sheet when the user shakes the iPhone. This is the proper way to implement this feature.
Instagram
Instagram only reads the accelerometer in DMs, and keeps reading it as long as the user is in the DM view.
WhatsApp
WhatsApp uses the accelerometer to add a motion effect to chat wallpapers. It is enabled by default, but you can switch this effect off in settings. The app stops reading the accelerometer when the effect is off. WhatsApp is mentioned here because it is a Facebook app.
Other Apps
The following apps didn’t show any sign of reading the accelerometer for no clear reason: Facebook Messenger, Signal, Slack, Telegram, TikTok, Threema, Twitter, and WeChat.
The next section will explain how you can find out the apps that read the accelerometer.
A Little bit Technical
As mentioned earlier, it is a bit hard for users to tell if an app is reading the accelerometer, but not for developers. I used the same method that I used in our earlier clipboard research: Xcode provides an option to view the system logs of the iPhone.
To do that, connect the iPhone to Xcode and open the iPhone console. The console displays a lot of log messages; to reduce the noise, type “accelerometer” in the search field. Now you only see the processes, or apps, that read the accelerometer. The following screenshot shows the log messages displayed when Instagram reads the accelerometer.
This video illustrates the process in action:
How about Browsers?
Browsers can also access accelerometer data without permission, just like other iOS apps. The question you might be asking is: do browsers relay accelerometer data to the websites you visit?
In iOS 13, Apple introduced a permission in Safari: a dialogue is presented to the user when a website requests accelerometer data. This change was triggered by a study showing that many popular websites included scripts that read accelerometer data. Since all iOS browsers are required to use WebKit, the permission dialogue protects access to the accelerometer regardless of the browser you are using, whether Safari, Firefox, or Google Chrome.
And here is a note for Android users: Google Chrome on Android shares motion sensor data with every website you visit by default. The motion sensors here refer to the accelerometer, gyroscope, and barometer. The good news is that you can change this default behavior. While there are many reasons to quit Google Chrome and switch to other browsers, this accelerometer issue needn’t be one of them.
So, if Google Chrome is your preferred browser on your Android phone and you are not comfortable with sharing the motion sensors with websites you visit, here is how you can disable it:
With the release of Android 13, I revisited popular browsers and tested if they allow websites to access the motion sensors/accelerometer by default. It turns out that Brave is the winner here. It is the only browser on Android that blocks access to the motion sensors by default.
Google Chrome and Microsoft Edge allow access by default, with the option to change this behavior in the settings. Surprisingly, DuckDuckGo and Firefox allow access to the motion sensors by default, and neither browser provides an option to disable it. This is particularly shocking because both browsers, especially DuckDuckGo, promise a host of privacy features.
I contacted DuckDuckGo to inquire about their decision to share motion sensor data with all visited websites despite the potential privacy issues discussed here. I will update this post with their response as soon as I hear from them.
This video illustrates how you can block access to the motion sensors on Chrome and Edge; it also shows that DuckDuckGo and Firefox don’t offer an option to block the access:
The following websites allow you to test how your browser handles accelerometer access:
As of iOS 15, access to the accelerometer is open to all apps. Accelerometer data can reveal private information about you that any app can extract by applying the right algorithm. The rule of thumb in information security is that private information should be protected. Access to the accelerometer should be protected.
Block Contacts is a new feature in Tinder that lets users avoid certain people on the app, even if they haven’t matched. Using this feature, a user can share with Tinder the contact information of whoever they would like to block. Tinder will then use this information to prevent blocked contacts from seeing each other on the app. We verified that the app only shares the contact info of the blocked contacts, not the entire contact list. However, users should be aware that Tinder collects the full name, email addresses, and phone numbers of every blocked contact.
Block Contacts
Tinder is a popular online dating app. It’s the original “swipe and match” mobile dating app that every other dating app has been modelled after. Recently, Tinder introduced a new feature called Block Contacts, which lets users avoid certain people who may be using the app. There are a variety of reasons why one would do that. For example, some don’t want to see or be seen by exes or co-workers.
So how does it work?
If you’re using Tinder and would like to block someone, all you need to do is share their contact information with Tinder. You can do this manually by entering the person’s name, email and/or phone number. Alternatively, you can grant the app access to your phone’s contacts and simply pick who you want to avoid. Now, any Tinder user who’s registered with a blocked e-mail or phone number will be hidden from you, and likewise you will be hidden from them.
It’s a nice, and perhaps necessary, feature. But has Tinder implemented it in a way that respects user privacy?
Privacy Concerns
Whenever there is handling of user information, such as contact information, many questions immediately arise. Let’s explore some of the privacy concerns about this feature.
The Entire Contact List?
Tinder offers the convenience of selecting which contacts to block from the phone’s contact list. To do that, a user needs to grant Tinder access to the phone’s contact list, which is protected by the operating system (iOS or Android). When granting such a permission, some would be concerned that the app is actually uploading their entire contact list to Tinder’s servers. This is a reasonable concern, especially since it did happen in the past. Thankfully, Tinder has assured users that the app uploads the contact information of only those selected to be blocked. Tinder also specified what information gets sent: name, email and/or phone number.
If you opt-in to the feature, we use your contact list so that you can quickly and easily select contacts you’d like to avoid on Tinder. Each time you visit Block Contacts, we’ll pull your list of contacts from your device so that you can pick who you would like to block. When you leave the feature, we’ll only keep the contact information for the people you have blocked (name, email and/or phone number). We’ll use this information to help prevent you from seeing your blocked contacts and from them seeing you (assuming they created an account with the same contact info you uploaded).
We verified this ourselves by testing the feature while watching the app’s network traffic. Doing so allows us to see exactly what the Tinder app sends to a server when using the app. We used Proxyman, a web debugging proxy that allows us to capture and analyze HTTPS traffic. Lucky for us, Tinder does not use certificate pinning, which makes it easy to inspect network traffic without needing to modify the app.
We used an iPhone in our test, and created several dummy contacts in the built-in Contacts app. Each dummy contact has a profile photo, full name, date of birth, multiple email addresses, phone numbers, addresses, etc. We included all these details in our dummy contacts to see what info ends up shared with Tinder.
(On a side note, we would like to thank Ronald Duck, Senior Duck Manager at Duck GmbH, who kindly volunteered to share his contact information with us for this test.)
Then we opened Tinder, went to Block Contacts, and gave it access to the phone’s contacts. The app listed all the contacts but only displayed the name, email addresses, and phone numbers of each contact. It left out all other information, such as profile photos and addresses. So far, the app had not uploaded any contacts to any server; they were only displayed locally in the app.
Then we picked a contact and marked it as blocked. Still, no contact was uploaded. The moment we hit “save,” the app uploaded only the contact we selected. No other contacts were uploaded.
Here comes the interesting part. The app uploaded the full name, all the email addresses, and all the phone numbers of that contact. Notably, it left out all other details. This is what the Tinder app sent to the server:
Concerns for Registered Users
To sign up for Tinder, you need one phone number, one email, and a nickname. If you are a Tinder user and another user blocks you, there is a good chance Tinder now knows your real name (or any other name you go by). Worse, if the user who blocked you keeps multiple emails and phone numbers for you in their contact list, Tinder will know those too.
Moreover, a key aspect of the Block Contacts feature is that blocked contacts won’t be notified when blocked. So, you won’t even know what other users uploaded about you or what information Tinder associates with your profile.
Concerns for Non-Users
You don’t use Tinder? The Block Contacts feature can be used to block anyone’s contact information, even people who aren’t associated with an active Tinder account. That means if someone blocks you, even without an account, Tinder will still store your contact information to prevent you from seeing them — in case you join Tinder in the future. There’s also no way to know if your contact information has been shared with Tinder.
Unsolicited Advice to Tinder
Block Contacts is a nice feature, but is it really effective?
If the blocked contact uses an alternative email or phone number, the feature is rendered useless. It is not uncommon for users to keep secondary phone numbers and emails for use in such apps. Tinder is asking users to upload private information for a feature that may not be effective.
To make this feature privacy-preserving, Tinder could upload only hash values of phone numbers and emails. For example, instead of uploading +1-555-555-1234, the app would upload something like this:
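A minimal sketch of the idea; SHA-256 and the normalization rules are our assumptions for illustration, as Tinder hasn’t published any scheme (and in practice a salted or keyed hash would be stronger, since raw hashes of low-entropy phone numbers can be brute-forced):

```python
import hashlib

def hash_contact_field(value: str) -> str:
    """Normalize a phone number or email and return its SHA-256 hex digest.

    The normalization (strip whitespace, lowercase, drop dashes and spaces)
    is an illustrative assumption so the same contact always hashes the same.
    """
    normalized = value.strip().lower().replace("-", "").replace(" ", "")
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The app would upload the digest instead of the raw value:
digest = hash_contact_field("+1-555-555-1234")
```

Matching would still work: Tinder already knows each registered user’s phone number and email, so it can hash them the same way and compare digests, without ever storing the raw contact details of blocked people.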
Moreover, the name of the contact is irrelevant, as it is not required when creating a Tinder account. It can be dropped entirely.
A Word to Tinder Users
Although the Block Contacts feature is practical, it comes at a privacy cost. You can still use it while protecting the privacy of your contacts. First, don’t share your contacts with Tinder. When you pick a contact from the contact list, the app uploads the contact’s real name, all of their phone numbers, and all of their emails. The better option is to enter that information manually. Tinder requires you to enter a name, a phone number, and an email. Since Tinder doesn’t require users to enter their real names when they create their accounts, the name plays no role here. So, just enter any name other than the real name of the contact.
Keep in mind that if the person you’re trying to block uses Tinder with a different phone number or email than what you enter, you won’t be able to block them.
Final Thoughts
Your contact list contains personal information about people you know, and it should be handled with care. Tinder, just like other social media platforms, runs algorithms to provide targeted ads and offers. Any data you share with such platforms will eventually feed these algorithms. It’s your own decision to share your own data, but your contact list contains data that belongs to your family, friends, and colleagues.
Facebook has recently stopped generating link previews in Messenger and Instagram for users in Europe to comply with Europe’s ePrivacy Directive. In our previous post we showed that Facebook’s servers were downloading data from any link sent through Messenger or Instagram, even gigabytes in size. The change is further evidence that Facebook may be using this data for purposes beyond generating link previews, as the change only applies in Europe, which has some of the most robust privacy laws.
Quick Recap of Link Previews
Our previous post covered some of the technical aspects of generating link previews: the short summary and preview image shown alongside links in messaging apps. While it’s a nice feature, we showed that generating link previews in some apps can come with unexpected privacy problems. In particular, Facebook Messenger and Instagram were the only two out of all the apps we tested that downloaded the entire contents of any link and stored it on Facebook’s servers, even if the data was gigabytes in size.
You can see this in action here:
We did contact Facebook in September 2020 about what we thought could be a privacy issue (and potentially a serious bug), and they basically dismissed our concerns.
Facebook in Europe
Not long after we published our link preview post, Facebook announced in December 2020 changes to their services in Europe which disabled certain features that didn’t comply with Europe’s 2002 Privacy and Electronic Communications Directive (ePrivacy Directive). Although Facebook did not specify exactly which features were disabled, we recently discovered that link previews are no longer available for users in Europe. This even applies to users outside of Europe if they happen to be chatting with someone in Europe.
This raised an eyebrow because it is an implicit confirmation that Facebook’s handling of link previews in Messenger and Instagram did not conform to privacy regulations in Europe, otherwise they wouldn’t have disabled the feature. As we demonstrated in our videos, Facebook servers download the content of any link sent through Messenger or Instagram DMs. This could be bills, contracts, medical records, or anything that may be confidential. Stopping this service in Europe strongly hints that Facebook may be using this content for purposes other than generating previews.
Europe’s ePrivacy Directive was introduced in 2002, but it wasn’t applicable to messaging and calling services until December 2020. The directive includes several articles that are relevant to the way Facebook generates link previews and may have been the reason why Facebook had to disable the feature. The articles are as follows:
Ensure that personal data can be accessed only by authorised personnel for legally authorised purposes.
Article 4:1a
In case of a particular risk of a breach of the security of the network, the provider of a publicly available electronic communications service must inform the subscribers concerning such risk and, where the risk lies outside the scope of the measures to be taken by the service provider, of any possible remedies, including an indication of the likely costs involved.
Article 4:2
Member States shall ensure that the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned has given his or her consent, having been provided with clear and comprehensive information, in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing.
Article 5:3
Since links may contain personal data, these articles prevent Facebook from storing, processing, or using this data without explicit consent from users in Europe. Furthermore, Facebook must clarify the purpose of processing and using the data prior to obtaining the consent.
Our videos clearly show that Facebook servers download and store the content of links sent through either app — if the same link is sent again, Facebook generates a link preview without downloading the link. This indicates that either the preview itself or the content is stored or cached.
Facebook Outside Europe
Link previews are still available in Messenger and Instagram for users outside of Europe, although the feature is disabled if users happen to be chatting with someone in Europe.
Users should be aware that Facebook uses the content of links shared in the chat for purposes other than generating link previews. This actually doesn’t go against Facebook’s Terms of Service, which clearly state that any content users share through any of Facebook’s services will be used for various purposes. This section literally includes everything:
What kinds of information do we collect?
Things you and others do and provide.
Information and content you provide. We collect the content, communications and other information you provide when you use our Products, including when you sign up for an account, create or share content, and message or communicate with others. This can include information in or about the content you provide (like metadata), such as the location of a photo or the date a file was created. […]
How do we use this information?
Provide, personalize and improve our Products.
We use the information we have to deliver our Products, including to personalize features and content (including your News Feed, Instagram Feed, Instagram Stories and ads) and make suggestions for you (such as groups or events you may be interested in or topics you may want to follow) on and off our Products. To create personalized Products that are unique and relevant to you, we use your connections, preferences, interests and activities based on the data we collect and learn from you and others (including any data with special protections you choose to provide where you have given your explicit consent); how you use and interact with our Products; and the people, places, or things you’re connected to and interested in on and off our Products.[…]
https://www.facebook.com/policy
In Europe, on the other hand, the use of personal data requires explicit consent from users even if using such data is covered by the Terms of Service.
Here is another video showing Facebook’s data-hungry servers downloading a 2.7 GB file 9 times:
The Bottom Line
Facebook disabled link previews for users in Europe to comply with new privacy regulations. This confirms our privacy concerns that sending links to private files in Messenger and Instagram is unsafe. While Facebook did disable link previews in Europe, users in other regions should refrain from sending links through either of these apps. The better option would be to switch to other messaging apps which respect user privacy in all parts of the world alike.
UPDATE (February 5, 2021): Facebook disabled link previews in Europe as the feature doesn’t comply with the regulations in Europe. Facebook Messenger and Instagram will no longer display link previews in chats for users in Europe.
If you enjoyed this work, you can support us by checking out our apps:
Link previews in chat apps can cause serious privacy problems if not done properly. We found several cases of apps with vulnerabilities such as leaking IP addresses, exposing links sent in end-to-end encrypted chats, and unnecessarily downloading gigabytes of data quietly in the background.
We think link previews are a good case study of how a simple feature can have privacy and security risks. We’ll go over some of the bugs we found while investigating how this feature is implemented in the most popular chat apps on iOS and Android.
Spoiler
What are link previews?
You’ve probably noticed that when you send a link through most chat apps, the app will helpfully show a preview of that link.
Whether it’s a news article, a Word or PDF document, or a cute gif, you’ll see a short summary and a preview image inline with the rest of the conversation, all without having to tap on the link. Like so:
Sounds like a nice feature, doesn’t it? But could a simple feature like this come with a few unexpected privacy and security concerns?
Let’s take a step back and think about how a preview gets generated. How does the app know what to show in the summary? It must somehow automatically open the link to know what’s inside. But is that safe? What if the link contains malware? Or what if the link leads to a very large file that you wouldn’t want the app to download and use up your data?
Let’s go over the different approaches that an app could take to show a link preview.
Approach 0: Don’t generate a link preview 👍
This one is straightforward: Don’t generate a preview at all. Just show the link as it was sent. This is the safest way to handle links, since the app won’t do anything with the link unless you specifically tap on it.
In our testing, the apps listed below follow this approach:
Signal (if the link preview option is turned off in settings)
Threema
TikTok
WeChat
Approach 1: The sender generates the preview ✅
In this approach, when you send a link, the app will go and download what’s in the link. It’ll create a summary and a preview image of the website, and it will send these as an attachment along with the link. When the app on the receiving end gets the message, it’ll show the preview as it was received from the sender, without having to open the link at all. This way, the receiver is protected from risk if the link is malicious.
This approach assumes that whoever is sending the link must trust it, since it’ll be the sender’s app that will have to open the link.
In our testing, the apps listed below follow this approach:
iMessage
Signal (if the link preview option is turned on in settings)
Viber
WhatsApp
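The sender-side flow above can be sketched roughly as: fetch a capped amount of the page, parse its Open Graph meta tags, and attach the result to the outgoing message. This sketch covers only the parsing step and assumes the page uses standard og: tags; each app’s real parser differs:

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collect the <meta property="og:..."> tags that link previews are built from."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            self.og[prop] = attrs["content"]

def build_preview(html: str) -> dict:
    """Return the title/description/image a sender's app could attach to the message."""
    parser = OpenGraphParser()
    parser.feed(html)
    return {
        "title": parser.og.get("og:title"),
        "description": parser.og.get("og:description"),
        "image_url": parser.og.get("og:image"),
    }
```

Because the sender’s device does the fetching and parsing, the receiver renders the attached preview without ever contacting the linked server.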
Approach 2: The receiver generates the preview 😱
This one is bad. This approach means that whenever you receive a link from someone, your app will open the link automatically to create the preview. This happens before you even tap on the link; you only need to see the message.
What’s wrong with this approach?
Let’s briefly explain what happens when an app “opens” a link. First, the app has to connect to the server that the link leads to and ask it for what’s in the link. This is referred to as a GET request. In order for the server to know where to send the data back, the request is sent from your phone’s IP address, which the server sees. Normally, this would be fine if you know that you’re planning on opening the link.
But, what if an attacker wants to know your approximate location without you noticing, down to a city block?
If you’re using an app that follows this approach, all an attacker would have to do is send you a link to their own server where it can record your IP address. Your app will happily open the link even without you tapping on it, and now the attacker will know where you are.
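To make the attack concrete, here is a minimal sketch of the attacker’s side: a tiny web server that records the IP address of whichever device “opens” the link. The setup is illustrative; a real attacker would host this on a public address behind the link they send:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

seen_ips = []  # every client that "opens" the link ends up recorded here

class BaitHandler(BaseHTTPRequestHandler):
    """Respond to the GET request an app issues when it generates a preview."""
    def do_GET(self):
        seen_ips.append(self.client_address[0])  # the visitor's IP address
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><head><title>Nothing to see</title></head></html>")

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port: int = 0) -> HTTPServer:
    """Create the bait server; port 0 lets the OS pick a free port.
    Call serve_forever() on the returned server to start handling requests."""
    return HTTPServer(("127.0.0.1", port), BaitHandler)
```

An app following Approach 2 would hit this server the moment the message arrives, logging the receiver’s IP without any tap.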
Not only that, this approach can also be a problem if the link points to a large file, like a video or a zip file. A buggy app might try to download the whole file, even if it’s gigabytes in size, causing it to use up your phone’s battery and data plan.
Our testing did find two apps that followed this approach:
██████████████████
██████████████████████
We reported this problem to the security teams at ████████ and ██████, and we’re happy to report that both apps have been fixed before we published this blog post. (Actually, ██████ is still in the process of fixing the issue, hence their name is redacted until a fix is deployed).
Approach 3: A server generates the preview 🤔
This takes the “middle” approach, quite literally. When you send a link, the app will first send the link to an external server and ask it to generate a preview, then the server will send the preview back to both the sender and receiver.
At first glance this seems sensible. Neither the sender nor receiver will open the link, and it avoids the IP leaking problem in Approach 2.
But say you were sending a private Dropbox link to someone, and you don’t want anyone else to see what’s in it. With this approach, the server will need to make a copy (or at least a partial copy) of what’s in the link to generate the preview. Now the question is: Does the server keep that copy? If so, how long does it keep it for? What else do these servers do with this data?
This approach shouldn’t work for apps that use end-to-end encryption, where no servers in between the sender and receiver should be able to see what’s in the chat (at least in theory, anyway).
These were some of the apps that followed this approach, although they differ significantly in how their servers opened links:
Discord
Facebook Messenger
Google Hangouts
Instagram
LINE (this one actually deserves a 🤬, but we’ll get to it later)
LinkedIn
Slack
Twitter
Zoom
█████████
Digging Deeper
Now that we’ve covered the basic approaches to generate link previews, we can go over the more specific details of the risks and the privacy implications we’ve discovered. Here we’ll describe each of the risks we found in our testing:
Unauthorized Copies of Private Information
Links shared in chats may contain private information intended only for the recipients. This could be bills, contracts, medical records, or anything that may be confidential. Apps that rely on servers to generate link previews (Approach 3) may be violating the privacy of their users by sending links shared in a private chat to their servers.
How so? Although these servers are trusted by the app, there’s no indication to users that the servers are downloading whatever they find in a link. Are the servers downloading entire files, or only a small amount to show the preview? If they’re downloading entire files, do the servers keep a copy, and if so for how long? And are these copies stored securely, or can the people who run the servers access the copies?
Also, some countries have restrictions on where user data can be collected and stored, most notably in the European Union as enforced by the GDPR.
In our testing, apps vary widely in how much data gets downloaded by their servers. Here’s a rundown of what we found:
Discord: Downloads up to 15 MB of any kind of file.
Facebook Messenger: Downloads entire files if it’s a picture or a video, even files gigabytes in size. 👋
Google Hangouts: Downloads up to 20 MB of any kind of file.
Instagram: Just like Facebook Messenger, but not limited to any kind of file. The servers will download anything no matter the size. 👋
LINE: Downloads up to 20 MB of any kind of file. (This one still deserves a big 👎 as we’ll discuss later)
LinkedIn: Downloads up to 50 MB of any kind of file.
Slack: Downloads up to 50 MB of any kind of file.
Twitter: Downloads up to 25 MB of any kind of file.
Zoom: Downloads up to 30 MB of any kind of file.
████████: ███████████████████████
(👋 We did contact Facebook to report this problem, and they told us that they consider this to be working as intended.)
Though most of the app servers we’ve tested put a limit on how much data gets downloaded, even a 15 MB limit still covers most files that would typically be shared through a link (most pictures and documents don’t exceed a few MBs in size). So if these servers do keep copies, it would be a privacy nightmare if there’s ever a data breach of these servers. This is especially a concern for business apps like Zoom and Slack.
Slack, for example, has confirmed that they only cache link previews for around 30 minutes.
So that secret design document that you shared a link to from your OneDrive, and you thought you had deleted because you no longer wanted to share it? There might be a copy of it on one of these link preview servers.
Getting Servers to Download Large Amounts of Data
As we covered in the previous section, apps that follow Approach 3 will rely on servers to generate link previews. Most of these servers will limit how much data gets downloaded, since downloading too much data could in theory use up a server’s capacity and cause service disruptions.
But as we highlighted in the last section, there were two apps that stood out in our testing: Facebook Messenger and Instagram, whose servers would download even very large files.
It’s still unclear to us why Facebook servers would do this when all the other apps put a limit on how much data gets downloaded.
Crashing Apps and Draining the Battery
In Approach 1 and Approach 2, the apps will open the link to generate a link preview when sending or receiving a link. In most cases, the apps wouldn’t have to download a lot of data to show the preview, at least if done properly. The problem arises when the app puts no limit on how much data gets downloaded when generating a preview.
Let’s say someone sent you a link to a really large picture, like this 1.38 GB picture of the Milky Way (if you’re using data, don’t tap on it!). A buggy app that follows Approach 2 will attempt to download the whole file to your phone, draining your battery and using up your data. This could also lead to your app crashing if it doesn’t know how to deal with large files.
Before they were fixed, both █████████ and ████████ apps had this problem. Viber is still vulnerable to this problem.
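The fix is simply to cap how many bytes the app reads when building a preview. A minimal sketch of such a cap; the 10 MB limit and chunk size are arbitrary illustrative choices:

```python
PREVIEW_CAP = 10 * 1024 * 1024  # 10 MB: stop fetching once we have enough for a preview

def read_capped(stream, cap: int = PREVIEW_CAP) -> bytes:
    """Read at most `cap` bytes from a file-like object, in chunks.

    A preview generator would pass the HTTP response body here instead of
    calling read() with no limit, so a multi-gigabyte link can't drain
    the battery or the data plan.
    """
    chunks = []
    remaining = cap
    while remaining > 0:
        chunk = stream.read(min(64 * 1024, remaining))
        if not chunk:
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```

A preview only needs the page’s metadata, which sits in the first few kilobytes of HTML, so a cap this generous loses nothing.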
Exposing IP Addresses
As we explained earlier, in order to open a link your phone has to communicate with the server that the link points to. Doing so means that the server will know the IP address of your phone, which could reveal your approximate location. Normally, this wouldn’t be much of a problem if you can avoid tapping on links you believe to be malicious.
In Approach 1, where the sender’s phone opens the link to generate the preview, the server will know the sender’s IP. This might not be a problem if we can assume that the sender trusts the link that they’re sending, since they’re the ones taking action to send a link.
Approach 2, however, is entirely unsafe. Since the receiver’s phone will be opening the link to generate the preview, the receiver’s IP will be known to the server. This would happen without any action taken by the receiver, and this can put them in danger of having their location exposed to the server without their knowledge.
Some chat apps encrypt messages in such a way that only the sender and receiver can read the messages, and no one else (not even the app’s servers). This is referred to as end-to-end encryption. Among the apps we tested, these were the ones that utilized this type of encryption:
iMessage
LINE
Signal
Threema
Viber
WhatsApp
Since only the sender or receiver can read encrypted messages and the links contained in them, Approach 3 shouldn’t be possible in these apps since it relies on having a server to generate link previews.
Well, it appears that when the LINE app opens an encrypted message and finds a link, it sends that link to a LINE server to generate the preview. We believe that this defeats the purpose of end-to-end encryption, since LINE servers know all about the links that are being sent through the app, and who’s sharing which links to whom.
Basically, if you’re building an end-to-end encrypted app, please don’t follow Approach 3.
Running Potentially Malicious Code on Link Preview Servers
Most websites these days contain JavaScript code to make them more interactive (and sometimes to show you ads and track you, but that’s a topic for another day). When generating link previews, no matter which of the above approaches is followed, it’s a good idea to avoid running any code from these websites, since as a service provider you can’t trust code that may be found in all the random links that get shared in chats.
We did find, however, at least two major apps that did this: Instagram and LinkedIn. We tested this by sending a link to a website on our server which contained JavaScript code that simply made a callback to our server. We were able to confirm that we had at least 20 seconds of execution time on these servers. It may not sound like much, and our code didn’t really do anything bad, but hackers can be creative.
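Our test boiled down to serving a page whose only job is to phone home if its script runs. A hypothetical sketch of generating such a bait page; the callback URL and token are placeholders for our own server:

```python
def make_bait_page(callback_url: str, token: str) -> str:
    """Build an HTML page whose script calls back to our server if executed.

    If a preview server merely parses the HTML, nothing happens. If it
    executes JavaScript, a request for callback_url?token=... shows up in
    our server logs, confirming code execution.
    """
    return f"""<html>
  <head>
    <meta property="og:title" content="Just a harmless page">
    <script>
      // Runs only if the preview server executes JavaScript.
      fetch("{callback_url}?token={token}");
    </script>
  </head>
  <body>Nothing to see here.</body>
</html>"""
```

A unique token per message lets us tell exactly which link, and therefore which app’s servers, triggered the callback.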
App Developers Respond to our Findings
Discord
Discord follows Approach 3, and their servers download up to 15 MB to generate link previews. However, we still have concerns about how long this data gets stored on their servers.
We contacted Discord to report our findings on September 19th, 2020, but we have not received a response from them.
Facebook Messenger and Instagram
Facebook Messenger and Instagram Direct Messages follow Approach 3, and since they are both owned and operated by Facebook they actually share the same server infrastructure. These servers were the only ones in our testing that put no limit on how much data gets downloaded.
To demonstrate this, we hosted a 2.6 GB file on our server, and we sent a link to that file through an Instagram DM. Since the file was on our server, we were able to see who’s downloading the file and how much data gets downloaded in total.
The moment the link was sent, several Facebook servers immediately started downloading the file from our server. Since it wasn’t just one server, that large 2.6 GB file was downloaded several times. In total, approximately 24.7 GB of data was downloaded from our server by Facebook servers.
This was so surprising that we had to take a video of what we saw:
As we mentioned earlier, Facebook was given notice of these issues when we submitted two reports to them, along with the videos, on September 12th, 2020.
Google Hangouts
Google Hangouts follows Approach 3, and their servers download up to 20 MB to generate link previews. Again, there is the concern about how long this data gets stored on their servers.
We submitted a report to Google on September 16th, 2020, but we have not received a response from them.
LINE 👎👎
Even though LINE is an end-to-end encrypted chat app, they do forward links sent in a chat to an external server that generates link previews. This server also forwarded the IP addresses of both the sender and receiver to that link. 🤦‍♀️
We sent a report with our findings to the LINE security team. They agreed with us that their servers shouldn’t be forwarding the IP addresses of their users to generate link previews, but they still think it’s acceptable for an end-to-end encrypted chat app to use an external server to generate link previews. They have however updated their FAQ to include this information and to show how to disable link previews.
As of versions 10.18.0 for Android and 10.16.1 for iOS, the apps no longer leak IP addresses when generating link previews.
LinkedIn
LinkedIn Messages follows Approach 3, and their servers download up to 50 MB to generate link previews. However, their servers were vulnerable to running JavaScript code, which allowed us to bypass the 50 MB download limit. We also had concerns about how long the link preview data gets stored on their servers.
We sent a report with our findings to the LinkedIn security team on September 16th, 2020 but we have yet to receive a response from them at the time of publishing this blog post.
Slack
Slack follows Approach 3, and their servers download up to 50 MB to generate link previews. However, we are still concerned about how long this data gets stored on their servers, especially since Slack is used primarily by businesses which may be sharing sensitive or confidential links through chats and channels.
Slack reported to us that link previews are only cached for approximately 30 minutes. This is also confirmed in their documentation.
Twitter
Twitter Direct Messages follows Approach 3, and their servers download up to 25 MB to generate link previews. There is still the problem of how long this data gets stored on their servers.
We contacted Twitter and they told us that this is working as intended. They have not disclosed how long the link preview data is kept for.
Viber
Viber is end-to-end encrypted and follows Approach 1, where the sender would generate the link preview. Though we did find a bug: if you send a link to a large file, your phone will automatically try to download the whole file even if it’s several gigabytes in size.
It’s also worth mentioning that even though Viber chats are end-to-end encrypted, tapping on a link will cause the app to forward that link to Viber servers for the purposes of fraud protection and personalized ads. You can find more info about this on their support website.
Zoom
Zoom follows Approach 3, and their servers download up to 30 MB to generate link previews. However, we still have concerns about how long this data gets stored on their servers, especially since Zoom is used primarily by businesses which may be sharing sensitive or confidential links through chats.
We submitted a report to Zoom on September 16th, 2020, and they have told us that they’re looking into this issue and that they’re discussing ways to ensure user privacy.
Since we’re only two people doing this research in our spare time, we could only cover a small set of the millions of apps out there. Link previews aren’t just limited to the handful of chat apps we looked at: there are many email apps, business apps, dating apps, games with built-in chat, and other kinds of apps that could be generating link previews improperly, and may be vulnerable to some of the problems we’ve covered here.
We think there’s one big takeaway here for developers: whenever you’re building a new feature, always keep in mind what sort of privacy and security implications it may have, especially if the feature is going to be used by thousands or even millions of people around the world. Link previews are a nice feature that users generally benefit from, but here we’ve showcased the wide range of problems this feature can have when privacy and security concerns aren’t carefully considered.
If you’re not a developer, we hope this report gives you an appreciation for how subtle differences in implementations of the same exact feature can have a massive impact on security and privacy.
Boring Yet Necessary Information
Here’s the table summarizing all the apps we tested and their version numbers:
The TikTok app uses insecure HTTP to download media content. Like all social media apps with a large user base, TikTok relies on Content Delivery Networks (CDNs) to distribute its massive amount of data geographically. TikTok’s CDN chooses to transfer videos and other media data over HTTP. While this improves the performance of data transfer, it puts user privacy at risk: HTTP traffic can be easily tracked, and even altered by malicious actors. This article explains how an attacker can replace videos published by TikTok users with different ones, including those from verified accounts.
Introduction
Modern apps are expected to preserve the privacy of their users and the integrity of the information they display. Apps that use unencrypted HTTP for data transfer cannot guarantee that the data they receive wasn’t monitored or altered. This is why Apple introduced App Transport Security in iOS 9, requiring all HTTP connections to use encrypted HTTPS. Google likewise changed the default network security configuration in Android Pie to block all plaintext HTTP traffic.
Apple and Google still provide a way for developers to opt out of HTTPS for backwards compatibility. However, this should be the exception rather than the rule, and most apps have made the transition to HTTPS. At the time of writing, TikTok for iOS (Version 15.5.6) and TikTok for Android (Version 15.7.4) still use unencrypted HTTP to connect to the TikTok CDN.
After a short session of capturing and analyzing network traffic from the TikTok app with Wireshark, it is hard to miss the large amounts of data transferred over HTTP. Inspecting the network packets more closely clearly reveals video and image data being transferred in the clear, unencrypted.
Consequently, TikTok inherits all of the known and well-documented HTTP vulnerabilities. Any router between the TikTok app and TikTok’s CDNs can easily list all the videos that a user has downloaded and watched, exposing their watch history. Public Wi-Fi operators, Internet Service Providers, and intelligence agencies can collect this data without much effort.
Figure 1 illustrates the network traffic as captured by Wireshark.
TikTok transports the following content via HTTP:
Videos: all videos that the app shows
Profile photos: the profile photos of TikTok accounts
Video still images: the preview image of a video that is displayed while the video is being downloaded
The captured data shows that videos are downloaded from the following domain names:
http://v19.muscdn.com
http://v21.muscdn.com
http://v34.muscdn.com
In addition, profile photos and still images are downloaded from http://p16.muscdn.com.
All the content types listed above are prone to tracking. For example, watch history can be created by capturing network traffic downloaded from http://v34.muscdn.com.
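To illustrate how little effort such tracking takes, here is a hedged Python sketch (function and variable names are our own) that reconstructs a watch history from captured plaintext HTTP requests like the ones Wireshark displays:

```python
def watched_videos(http_requests):
    """Reconstruct a watch history from raw plaintext HTTP requests.

    Because the TikTok app fetches videos over unencrypted HTTP, anyone
    on the network path can read request lines and Host headers.
    """
    history = []
    for raw in http_requests:
        request_line = raw.split("\r\n", 1)[0]  # e.g. "GET /video.mp4 HTTP/1.1"
        parts = request_line.split(" ")
        if len(parts) != 3 or parts[0] != "GET":
            continue
        # Find the Host header to tell video CDN traffic apart.
        host = ""
        for header in raw.split("\r\n")[1:]:
            if header.lower().startswith("host:"):
                host = header.split(":", 1)[1].strip()
                break
        if host.endswith("muscdn.com"):
            history.append("http://" + host + parts[1])
    return history
```

Any on-path observer running an equivalent of this over captured packets ends up with the full list of videos a user watched.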
Moreover, a man-in-the-middle attacker can alter the downloaded content, for example by swapping the profile photos of accounts with forged ones. However, this is not as critical as swapping videos: while a picture is worth a thousand words, a video is certainly worth more. An attacker can convey far more fake facts in a spam video swapped in for a video that belongs to a celebrity or a trusted account.
The circulation of misleading and fake videos in a popular platform such as TikTok poses huge risks. That encouraged us to stage a man-in-the-middle attack to swap videos and demonstrate the results. The following section delves deeper into the technical details of our work.
Methodology
We prepared a collection of forged videos and hosted them on a server that mimics the behavior of TikTok CDN servers, namely v34.muscdn.com. To keep it simple, we only built a scenario that swaps videos: we kept profile photos intact, although they can be similarly altered, and we only mimicked the behavior of one video server. This produces a mix of fake and real videos and gives users a sense of credibility.
To get the TikTok app to show our forged videos, we need to direct the app to our fake server. Because our fake server impersonates TikTok servers, the app cannot tell that it is communicating with a fake server. Thus, it will blindly consume any content downloaded from it.
The trick to direct the app to our fake server is simple; it merely involves adding a DNS record that maps the domain name v34.muscdn.com to the IP address of our fake server.
This can be achieved by actors who have direct access to the routers that users are connected to. First, a record mapping the domain name v34.muscdn.com to a fake server has to be added to a DNS server. Second, the infected routers have to be configured to use that corrupt DNS server. Now, when the TikTok app tries to look up the IP address of v34.muscdn.com, the corrupt DNS server returns the IP address of the fake server. Then, the app will send all subsequent calls to the fake server that is impersonating TikTok’s v34.muscdn.com.
Those actions can be performed by any of the following actors:
Wi-Fi Operators: operators of public Wi-Fi networks can configure the router to use a corrupt DNS server
VPN providers: a malicious VPN provider can configure a corrupt DNS server for its users
Internet Service Providers (ISPs): Internet Service Providers such as telecom companies have full access to the internet connections of their customers. They can configure a corrupt DNS server for their customers to swap content or track user activities
Governments and intelligence agencies: in some countries, governments and intelligence agencies can force ISPs to install tools that track or alter data
If you distrust any of these actors, then what you watch on TikTok may have been altered. This also applies to any internet service that uses HTTP.
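The DNS trick described above is the entire attack. The corrupt resolver's logic can be sketched in a few lines of Python; the IP address matches our fake server from the figures, and the names are our own:

```python
# The spoofed record a corrupt DNS server would serve: the TikTok video
# host resolves to our fake server, all other names resolve normally.
FAKE_SERVER_IP = "192.168.13.2"  # IP address of our fake server
SPOOFED_RECORDS = {"v34.muscdn.com": FAKE_SERVER_IP}

def corrupt_resolve(hostname, real_resolve):
    """Answer a DNS query: return the fake IP for the spoofed host,
    defer to the real resolver for everything else."""
    return SPOOFED_RECORDS.get(hostname) or real_resolve(hostname)
```

Because the app connects over plain HTTP, there is no certificate check that could unmask the impersonation; over HTTPS, this redirection would fail.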
Figure 2 illustrates the HTTP network traffic directed to the fake server. The highlighted row shows a video request sent by the app to the destination IP 192.168.13.2, which is the IP address of our fake server. The fake server then picks a forged video and returns it to the app which, in turn, plays the forged video to the user as shown in the demo video. Note that only video requests are directed to the fake server. Requests to download profile photos and still images are directed to the real servers, i.e. we left them unchanged as per our scenario. In contrast, Figure 1 shows a similar video request sent to the real TikTok server with the IP 92.122.188.162.
The forged videos we created present misleading information about COVID-19. This illustrates a potential source of disseminating misinformation and false facts about a contemporary critical topic.
As shown in the demo video and Figures 3-6, the forged videos appeared on popular and verified accounts like @who, @britishredcross, @americanredcross, @tiktok, @lorengray, and @dalia. (@lorengray has over 42 million followers and 2.3 billion likes)
To recap, only users connected to our home router could see this malicious content. However, if a popular DNS server were hacked to include a corrupt DNS record as we showed earlier, misleading information, fake news, or abusive videos could be viewed on a large scale, and such an attack is not far-fetched.
Conclusion
The use of HTTP to transfer sensitive data has not gone extinct yet, unfortunately. As demonstrated, HTTP opens the door for server impersonation and data manipulation. We successfully intercepted TikTok traffic and fooled the app to show our own videos as if they were published by popular and verified accounts. This makes a perfect tool for those who relentlessly try to pollute the internet with misleading facts.
TikTok, a social networking giant with around 800 million monthly active users, must adhere to industry standards in terms of data privacy and protection.
UPDATE (JUNE 30, 2020): The list of apps in the original report from March 2020 is NOT an exhaustive list. We examined a sample of popular apps, and listed the ones that exhibited the behavior of excessive clipboard access. Many apps have been updated since then. In light of that, we tested the apps again. The apps that stopped reading the clipboard are crossed out.
If you enjoyed this work, you can support us by checking out our apps:
This article provides an investigation of some popular apps that frequently access the pasteboard without user consent. These apps range from popular games and social networking apps, to news apps of major news organizations. We found that many apps quietly read any text found in the pasteboard every time the app is opened. Text left in the pasteboard could be as simple as a shopping list, or could be something more sensitive: passwords, account numbers, etc.
Introduction
Apps on iOS and iPadOS have unrestricted access to the system-wide general pasteboard, also referred to as the clipboard. The potential security risks of this vulnerability were thoroughly discussed in a previous article: Precise Location Information Leaking Through System Pasteboard. We explored popular and top apps available on the App Store and observed their behavior using the standard Apple development tools. The results show that many apps frequently access the pasteboard and read its content without user consent, albeit only text-based data.
The apps we chose for this investigation belong to various App Store categories, e.g. games, social networking, and news. As we described in our previous article, the severity of the pasteboard vulnerability is greatest when popular and frequently used apps exploit it. Thus, we targeted a variety of popular apps from the top lists of the App Store.
Methodology
Apple provides Xcode and Xcode Command Line tools for developers to build apps for iOS, iPadOS, and macOS. We used these official tools to monitor and analyze the behavior of apps installed on our iOS and iPadOS devices. The method is simple: Once we connect and pair the devices with Xcode, we can read the system log of the device. Fortunately, all pasteboard events are clearly logged. Figure 1 shows an example of the system log output when the Fox News app is opened. The following explains the key information in the log output:
The log outputs all events and is filtered by the keyword “pasteboard”
The highlighted event in Figure 1 shows when the Fox News app requested access to the pasteboard with ID com.apple.UIKit.pboard.general. This is the ID of the system-wide pasteboard
BundleID com.foxnews.foxnews is the ID that uniquely identifies the Fox News app on the App Store
The event message that starts with “Loading item …” in Figure 2 indicates that the app has read the content of the pasteboard.
The type public.utf8-plain-text indicates that the content that the app has read is text.
This method can be performed by any iOS or Mac developer.
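As a rough illustration of the filtering we applied, the following Python sketch (the log-line format is simplified and the names are our own) reduces system-log lines to pasteboard events and checks whether a given app touched the general pasteboard:

```python
GENERAL_PASTEBOARD_ID = "com.apple.UIKit.pboard.general"

def pasteboard_events(log_lines):
    """Keep only log lines that mention the pasteboard,
    mirroring the keyword filter described above."""
    return [line for line in log_lines if "pasteboard" in line.lower()]

def accessed_general_pasteboard(log_lines, bundle_id):
    """True if the app with bundle_id requested the system-wide pasteboard."""
    return any(GENERAL_PASTEBOARD_ID in line and bundle_id in line
               for line in pasteboard_events(log_lines))
```

Running the equivalent of this filter while opening each app is all it took to compile the findings below.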
Criteria
We include any app that requests and reads the content of the system-wide pasteboard every time it’s opened, and consider it to be highly suspicious. There are games and apps that do not provide any UI that deals with text, yet they read the text content of the pasteboard every time they’re opened.
Every app that is popular or on a top list according to the App Store rankings qualifies to be part of this investigation. However, we picked a diverse collection of apps to provide proof that such a suspicious practice of snooping on the pasteboard exists in many apps.
There is a considerable number of apps that only read the content of the pasteboard on launch. That is, the app reads the pasteboard only when it is opened for the first time. The next time it reads the pasteboard again is when the app is quit and relaunched. Although such a behavior is also suspicious, we decided to exclude such apps and focus on the ones that access the pasteboard more frequently.
As noted in our previous article, an app that accesses the pasteboard can also read what has been copied on a Mac if Universal Clipboard is enabled.
Findings
While unrestricted access to the pasteboard allows apps to read any data type, all the apps we investigated for this article only requested access to text data. In other words, they are only interested in reading text and ignore other data types that may have been copied to the pasteboard, such as photos and PDF documents. Surprisingly, none of the widgets that were tested accessed the pasteboard.
Our findings only document apps that read the pasteboard every time the app is opened. However, apps can delay snooping on the pasteboard until some later time or event takes place (e.g. signing up), and hence are not included in our findings.
List of Apps
This section summarizes the list of apps that snoop on the pasteboard every time the app is opened. The apps are listed alphabetically in the following format:
App Name — BundleID
UPDATE (AUGUST 16, 2020): More apps crossed out *
We thank developers who updated their apps to fix this privacy issue.
News
ABC News — com.abcnews.ABCNews
Al Jazeera English — ajenglishiphone
CBC News — ca.cbc.CBCNews
CBS News — com.H443NM7F8H.CBSNews
CNBC — com.nbcuni.cnbc.cnbcrtipad *
Fox News — com.foxnews.foxnews *
News Break — com.particlenews.newsbreak *
New York Times — com.nytimes.NYTimes *
NPR — org.npr.nprnews
ntv Nachrichten — de.n-tv.n-tvmobil
Reuters — com.thomsonreuters.Reuters
Russia Today — com.rt.RTNewsEnglish *
Stern Nachrichten — de.grunerundjahr.sternneu
The Economist — com.economist.lamarr *
The Huffington Post — com.huffingtonpost.HuffingtonPost *
The Wall Street Journal — com.dowjones.WSJ.ipad *
Vice News — com.vice.news.VICE-News *
Games
8 Ball Pool™ — com.miniclip.8ballpoolmult
AMAZE!!! — com.amaze.game
Bejeweled — com.ea.ios.bejeweledskies
Block Puzzle — Game.BlockPuzzle
Classic Bejeweled — com.popcap.ios.Bej3
Classic Bejeweled HD — com.popcap.ios.Bej3HD
FlipTheGun — com.playgendary.flipgun
Fruit Ninja — com.halfbrick.FruitNinjaLite *
Golfmasters — com.playgendary.sportmasterstwo
Letter Soup — com.candywriter.apollo7
Love Nikki — com.elex.nikki
My Emma — com.crazylabs.myemma
Plants vs. Zombies™ Heroes — com.ea.ios.pvzheroes
Pooking – Billiards City — com.pool.club.billiards.city
PUBG Mobile — com.tencent.ig
Tomb of the Mask — com.happymagenta.fromcore
Tomb of the Mask: Color — com.happymagenta.totm2
Total Party Kill — com.adventureislands.totalpartykill
Pigment – Adult Coloring Book — com.pixite.pigment *
Recolor Coloring Book to Color — com.sumoing.ReColor
Sky Ticket — de.sky.skyonline *
The Weather Network — com.theweathernetwork.weathereyeiphone *
Conclusion
Access to the pasteboard in iOS and iPadOS requires no app permission as of iOS 13.3. While the pasteboard provides the ease of sharing data between various apps, it poses a risk of exposing private and personal data to suspicious apps. We have investigated many popular apps in the App Store and found that they frequently access the pasteboard without the user being aware. Our investigation confirms that many popular apps read the text content of the pasteboard. However, it is not clear what the apps do with the data. To prevent apps from exploiting the pasteboard, Apple must act.
Media Coverage
This article was well received on social media and has been covered by several tech websites. The following list provides links to the coverage:
UPDATE (JUNE 22, 2020): Apple addressed this vulnerability in iOS 14 and iPadOS 14 by showing a notification every time an app reads the clipboard.
Disclaimer: We submitted this article and source code to Apple on January 2, 2020. After analyzing the submission, Apple informed us that they don’t see an issue with this vulnerability.
iOS and iPadOS apps have unrestricted access to the system-wide general pasteboard. A user may unwittingly expose their precise location to apps simply by copying a photo taken with the built-in Camera app to the general pasteboard. Through the GPS coordinates contained in the embedded image properties, any app the user opens after copying such a photo to the pasteboard can read the location information stored in the image properties and accurately infer the user’s precise location. This can happen completely transparently and without user consent.
Terminology
To clearly address the exploit described in this article, this section defines a few terms to bring the reader to the same context as intended by the authors.
User: a user using an Apple device running the latest iOS or iPadOS version, Version 13.3, at the time of writing.
Pasteboard: the system-wide general pasteboard available in iOS and iPadOS that is used for copy-cut-paste operations. It is identified with the general constant.
Malicious app: an app that repeatedly reads the content stored in the pasteboard to collect data about the user.
Camera app: the built-in camera app that is pre-installed on iOS and iPadOS.
Current device: the Apple device used by the user that has a malicious app installed on it.
Vulnerability window: the timeframe during which an app can read the pasteboard.
Affected Apple Products
All Apple devices running the latest version of iOS and iPadOS — Version 13.3, at the time of this writing.
Description
Apple has designated a special permission for accessing GPS information from an Apple device. Apps can only access location information if the user has explicitly granted such access. An average user assumes that apps cannot know their location unless the location services permission is granted. However, an app can infer a user’s location without requesting it from the user — by analyzing the geolocation of the user’s IP address, for example. Fortunately, this method does not provide an intrusive app with a high degree of accuracy. The pasteboard can potentially provide such intrusive apps with exactly what they are after — precise user location without the user’s consent.
Once a user grants the Camera app access to the location services, which normally happens when the user opens the Camera app for the first time, the Camera app adds precise GPS information to every photo the app takes. GPS information is part of the image metadata that iOS and iPadOS store in every photo. Developers can read these properties using the Image I/O Framework API CGImageSourceCopyProperties. Thus, it is a trivial task for developers to extract GPS information from a photo.
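The photo metadata stores GPS coordinates as degree/minute/second values plus a hemisphere reference, so converting them to decimal degrees takes only basic arithmetic. A Python sketch of that conversion (the function name is our own; on iOS the raw values come from the {GPS} dictionary in the image properties):

```python
def gps_to_decimal(dms, ref):
    """Convert EXIF-style GPS data (degrees, minutes, seconds) and a
    hemisphere reference ('N'/'S'/'E'/'W') to signed decimal degrees."""
    degrees, minutes, seconds = dms
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative by convention.
    return -decimal if ref in ("S", "W") else decimal
```

This is the entire extra effort an app needs to turn a copied photo into a precise location fix.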
The aforementioned facts lay the ground for a potential exploit of precise location information that can be utilized by unauthorized apps. The following three steps have to be fulfilled for the vulnerability window to open:
The user grants the Camera app access to the location services
The user takes a photo using the Camera app
The user copies the photo to the pasteboard
An average user is very likely to have performed all three steps. Copying photos from the Photos app is an increasingly common practice. As a result, the likelihood that a user has left a photo in the pasteboard is alarmingly high. With that, the user has exposed their precise location information to any app used after this point in time, regardless of whether that app has been granted access to location services.
A malicious app that constantly reads the content of the pasteboard can easily abuse this data. An average user is not aware that precise location information about the photo is stored in the photo itself, hence unaware of the privacy breach.
In addition to the GPS location stored in the photo, the malicious app can read the timestamp of the photo, the model of the device that shot it, and its operating system version. With simple comparisons, the malicious app can match the values read from the photo properties against the corresponding actual values of the current device. Eventually, the malicious app can infer with great confidence whether the photo in the pasteboard was shot by the current device, and hence whether the extracted location information belongs to the current device or user. In addition, the timestamp stored in the photo supplies the temporal property of the leaked location information.
iOS and iPadOS are designed to allow apps to read the pasteboard only when apps are active in the foreground. However, there are other techniques a malicious app can implement in order to increase the likelihood the app can read the pasteboard. As we will discuss later in the demonstration app, a widget extension can read the pasteboard as long as it is visible in the Today View. As a result, a widget placed on top of the Today View can read the pasteboard every time the user swipes to the Today View, hence expanding the vulnerability window. On iPadOS, a user can configure the Today View to be always visible on the home screen, allowing malicious app widgets more time and frequency to access the pasteboard.
Impact
Several apps made it to the news recently with links to organizations notorious for compromising user privacy. Unfortunately, some of the apps were very popular in some countries. If such malicious apps relied on reading user location from photos left in the pasteboard as described in this article, enough data may have already been harvested to put people’s lives in danger.
Remedies
Apps should not have unrestricted access to the pasteboard without the user’s consent. The best fix for this exploit is to introduce a new permission that enables the user to grant pasteboard access per app, just like contacts, location services, and photos.
Alternatively, the operating system can only expose the content of the pasteboard to an app when the user actively performs a paste operation.
As a quick fix, the operating system can delete location information from photos once they are copied to the pasteboard.
Demonstration App
To illustrate the pasteboard vulnerability, we developed KlipboardSpy – a sample app that reads the pasteboard every time it enters the foreground. If a photo with GPS information is detected in the pasteboard, the app stores the photo properties. The app lists all saved photo properties in a table view. A detailed view is provided to show the properties extracted from each photo.
The KlipboardSpy.swift file shows sample code to read and store photo properties. Namely, the readClipboard() method is called every time the app becomes active. It reads the contents of the pasteboard and if it contains a photo, the method parses its properties and looks for GPS location information. If found, it will then persist all the properties in a Core Data store.
To maximize access to the pasteboard, we added a widget extension to the app. The widget extension increased the likelihood to access the pasteboard considerably. The viewDidAppear(_:) method is called every time the widget is shown in the Today View, making it perfectly suited for the readClipboard() method. Moreover, we added an App Group so that the widget can share captured pasteboard content with our app.
After both targets, namely KlipboardSpy and KlipSpyWidget, of the Xcode project are built and run successfully on a device, open the Photos app and copy an image with GPS information stored in its metadata. Then either open the app or swipe to open the Today View and scroll to make KlipSpyWidget visible on the screen (Figure 3). That’s it. The GPS location has been captured. Open KlipboardSpy. A new row for the captured content is now shown (Figure 1). Tap on the cell to navigate to the detailed view to inspect all the properties that the app captured from the pasteboard. Repeat by copying another photo, and so on.
The following table describes the fields that appear in the detailed view (Figure 2):
GPS Latitude — the latitude coordinate as extracted from the photo in the pasteboard
GPS Longitude — the longitude coordinate as extracted from the photo in the pasteboard
Image Time — the timestamp when the photo was taken, as extracted from the photo in the pasteboard
Image Device — the model of the device used to take the photo, as extracted from the photo in the pasteboard
Current Device — the model of the device that is running KlipboardSpy
Image OS Version — the OS version of the device used to take the photo, as extracted from the photo in the pasteboard
Current OS Version — the OS version of the device that is running KlipboardSpy
The malicious app can infer the credibility of the location information extracted by matching the following properties:
Image Time vs. Current Time — a match is not necessary; this comparison only infers how old the coordinates are
Image Device vs. Current Device
Image OS Version vs. Current OS Version — a match will not occur for old photos that were taken with an older OS version
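A hedged sketch of that inference in Python (the field names and freshness threshold are our own; the real values come from the image properties and the current device):

```python
def location_is_credible(photo, device, max_age_seconds=3600):
    """Infer whether GPS data read from a pasteboard photo likely
    describes the current device by matching the properties above.

    photo:  dict with 'model', 'os_version' (tuple), 'timestamp'
    device: dict with 'model', 'os_version' (tuple), 'now'
    """
    if photo["model"] != device["model"]:
        return False
    # An older photo may carry an older OS version, but never a newer one.
    if photo["os_version"] > device["os_version"]:
        return False
    # A fresh timestamp makes the coordinates more trustworthy.
    return device["now"] - photo["timestamp"] <= max_age_seconds
```

Even when the match fails, the app still learns a location; the match only raises its confidence that the location belongs to the current user.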
Related Vulnerabilities
This article focuses on exploiting leaked location information from the pasteboard. We consider this leak very critical, as it gives away precise location information without the user’s consent. Exposing such precise location information can be life-threatening in some parts of our world. Having said that, unrestricted access to the pasteboard can lead to other personal data breaches.
A malicious app that actively monitors the pasteboard can store any content it finds in the pasteboard. Content ranges from contacts, photos, phone numbers, emails, IBAN bank information, URLs, PDFs of official documents, audio files, word documents, spreadsheets, to passwords. Users are always oblivious to what they might have left stored in the pasteboard. Sensitive data may reside unnoticed in the pasteboard for an extended period of time, making it vulnerable to such exploits.
Abuse of content in the pasteboard is not restricted to read access; an app can also maliciously alter the content of the pasteboard. For example, a malicious app can alter the image properties of photos in the pasteboard to make it look as though a photo was taken by a particular non-Apple device. Malicious apps can also collude by adding specific data that only they understand to the metadata of an image, i.e. communicating via the general pasteboard. A malicious app could detect IBAN bank information in the pasteboard and quietly replace it with a different IBAN, hoping that the user makes a bank transfer to it. Malicious scenarios are countless.
With the introduction of the Universal Clipboard some years ago, a malicious app can also read data from the macOS pasteboard, expanding the reach of the app and opening up endless new malicious scenarios.
Conclusion
The pasteboard in iOS and iPadOS offers a convenient method to share content between different apps. Users are often oblivious to the fact that content stored in the pasteboard is not only accessible to apps they intend to share data with, but to all installed apps. As demonstrated in this article, this false assumption opens doors to a series of malicious practices that seriously compromise users’ personal data. We developed KlipboardSpy to demonstrate a scenario that malicious apps can exploit to gain access to precise user location information simply by reading the GPS properties of a photo left in the pasteboard. As the pasteboard is designed to store all types of data, the exploit is not only restricted to leaking location information. By gathering all these types of content, a malicious app can covertly build a rich profile for each user, and what it can do with this content is limitless.
Acknowledgements
The photos featured in the demo videos of KlipboardSpy are courtesy of Dr. Christian Knirsch.
Media Coverage
This article was well received on social media and has been covered by several tech websites. The following list provides links to the coverage: