Apple Enforces New iOS Privacy Policy: Will Security and Transparency Hurt Sales Networks?

1Q 2021 | IN-6036

Apple’s new privacy policy for the iOS marketplace has developers split into two camps: “security versus marketing.” However, the ripple effect will be more far-reaching than initially thought and may even extend to the spheres of citizen data protection, surveillance operations, and online sales networking strategies.


Key Points in Apple's New Privacy Policy

NEWS


On December 8th, Apple put into effect a new iOS privacy protocol forcing all third-party app developers to disclose detailed information regarding their security options, privacy policies, and data collection processes. This includes all information collected from app dev tools, SDKs, analytics and web tools, advertising and marketing tools, and any other third-party source involved in the development process. Apple’s new policy states, among other requirements, that:

  • “Apps must respect the user’s permission settings and not attempt to manipulate, trick, or force people to consent to unnecessary data access.”
  • “Developers must get permission via the app tracking transparency prompt for data collected in the app and used for tracking.”
  • “Per the Developer Program License Agreement, you may not derive data from a device for the purpose of uniquely identifying it.”
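In practice, the “app tracking transparency prompt” referenced above maps to Apple’s AppTrackingTransparency framework (iOS 14.5 and later). The sketch below, using a hypothetical `requestTrackingConsent` helper, illustrates how a developer would request consent before reading the IDFA; it assumes an `NSUserTrackingUsageDescription` string is declared in the app’s Info.plist, and is an illustrative outline rather than production code.

```swift
import AppTrackingTransparency
import AdSupport

// Illustrative sketch: surface the App Tracking Transparency prompt
// and only touch the IDFA if the user explicitly consents.
// Assumes Info.plist contains an NSUserTrackingUsageDescription entry.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Consent granted: the IDFA is readable.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking authorized, IDFA: \(idfa)")
        case .denied, .restricted:
            // No consent: the IDFA returns all zeroes and must not be used.
            print("Tracking declined")
        case .notDetermined:
            // The prompt has not yet been shown to the user.
            print("Consent not yet determined")
        @unknown default:
            break
        }
    }
}
```

If the user declines, the advertising identifier is zeroed out, which is precisely the mechanism behind the marketing-side concerns discussed below.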

From a digital security standpoint, these new requirements are expected to noticeably strengthen the overall security posture of Apple’s app marketplace and further enhance end-user data protection. The knee-jerk reaction of certain companies, however, was to accuse Apple of severely compromising related sales networks and marketing processes, with Facebook championing this side of the rhetoric. At first glance, this seems like a standard “pro or con” discourse regarding end-user marketing identifiers, but it actually goes much further. What does it really mean from a big-picture perspective?

From the FBI and the U.S. Congress to the CCPA and GDPR

IMPACT


First, a little background to help paint a better picture. Digital security concerns around app stores, Apple privacy issues, Google Chrome extensions, and the Facebook platform are nothing new in the consumer cybersecurity sphere. Diminished privacy has repeatedly drawn global attention: elaborate court hearings involving Facebook, the FBI and DoD pressing Apple over consumer biometrics, and the European Union fining Google over privacy and advertising. Meanwhile, other companies, such as Intel, are asking Congress to increase privacy and security measures and to let companies deny governmental and law enforcement agencies access as a means of protecting consumer data.

As such, looking at the bottom line, two prevailing aspects drive the tech giants’ strategies. The one that is easy to spot is advertising, marketing, and monetization. The second, which dives deeper, is data gathering, intelligence, and surveillance operations. Regulations like the California Consumer Privacy Act (CCPA) in the United States, which affects all company databases, website, and app data, and the General Data Protection Regulation (GDPR) in the European Union, which covers a great deal of digitized assets, look to protect consumers’ information against such practices. However, both advertising and intelligence operations will persist because they are inexorably tied to most, if not all, digital platforms.

Apple’s decision will certainly give consumers greater insight into when, how, and by whom their personal data is being handled, and it gives Apple an additional arrow in its consumer privacy quiver. Examining the online and social media posts, user agreement updates, and discourse among Apple, Google, and Facebook, one cannot fail to spot that Facebook’s concerns revolve around advertising and monetization. The company argues that Apple’s decision will diminish its reach, audience network, and marketing operations by effectively letting consumers choose whether a company may keep track of their personal information and Identifier for Advertisers (IDFA).

"Counterintuitive in the Short Term, and a Threat for IoT Security in the Long Term"

RECOMMENDATIONS


Does that mean that Facebook is afraid that consumers might gaze into a veritable abyss of covert marketing operations taking place without their knowledge or consent, which might force them to revisit the fine print in the user agreement? Or is the firm worried that the policy will create serious hurdles for developers rolling out new features or upgrading existing ones?

Certainly, the latter concern holds to a certain extent for many tech companies: making platforms more transparent can be challenging, especially if they were not originally designed to give end users greater detail or to include security options from the start of the development process. The former concern is also vital for many companies and developers that rely on ads as their primary monetization model. However, if obfuscating privacy details no longer produces the desired effect, then a transparent solution, in which consumers have “greater visibility” into the services that siphon their personal data, may be the way to go.

On the other hand, Facebook states that Apple’s new policy will directly harm new businesses: if users must be informed and give consent regarding data collection, user tracking, and marketing, their choice of apps and online services may be negatively affected. The fear is that users might not opt for a certain platform once they are aware of which services track them. This argument, however, seems counterintuitive at best in the short term and a threat to IoT security in the long term. Consumer smartphone apps can, for example, gain access to and gather data from smart home and home automation devices, virtual assistants, connected vehicles, payment and transaction information, healthcare and telemedicine data, geolocation and transportation, enterprise workforce patterns from Bring Your Own Device (BYOD) applications, and any number of behavioral analytics, to name just a few. Doing this without consumer knowledge or consent would be detrimental to the security of those other IoT applications.

Boosting data security and privacy operations during the app development and rollout phases would harden the security posture against more sophisticated cyberwarfare and cybersurveillance operations by minimizing the reach and effect of fraudulent or infected apps. In some cases, this may even help limit the reach of state-sponsored cyberattacks.

Zero-day attacks, software vulnerabilities, obfuscated third-party involvement, consumer data protection issues, spyware, and adware have played a big part in driving new security updates, not just for Android and iOS but for most digital platforms. Providing such a small, albeit noticeable, boost in security operations, transparency, and consumer data protection should not be labeled a “disruption.” Rather, it should be regarded as a necessary and vital “feature” of all future app development processes.
