Posts tagged Security

Facebook Apparently Terrified by Default Privacy

:: Security, Privacy

By: John Clements

Judging by their full-page advertisements in various newspapers, Facebook is extremely upset about Apple’s upcoming move to make cross-app privacy the default. Their two main “hooks” are that it will hurt small businesses and that it will make the internet less “free”. I believe that neither of these is the case.

What is clear to me is that this move will hurt Facebook’s ability to collect piles of information about its users, and to leverage that to collect huge amounts of advertising money.

The Current Situation

Before I go on, a caveat here: while I’m a programmer and a researcher, I am not a security and privacy researcher, so you should check the things I’m telling you.

With that said, here’s a brief summary of the current situation. Apple’s iOS currently includes a notion of an “IDFA”, an “Identifier for Advertisers.” Any app can see this IDFA, and can send information about what you’re doing in that app, tagged with that IDFA, back to its own central servers. This information can then be connected to information uploaded by other apps in order to build a detailed picture of you.

This allows Facebook and others to understand what you’re like, and how best to separate you from your money by presenting you with advertising that’s likely to convince you to buy things. It also allows Facebook to identify characteristics that will allow them to present content that you will be really really interested in.

Back in June 2020, at their developers conference, Apple announced a change to the way that IDFAs would work. Since this was a developer conference, they were addressing app developers, and telling them how they would use the new framework to observe a user’s IDFA:

"To ask for permission to track a user, call the AppTrackingTransparency framework, which will result in a prompt that looks like this.

opt-in tracking dialog

This framework requires the NSUserTrackingUsageDescription key to be filled out in your Info.plist.

You should add a clear description of why you’re asking to track the user.

The IDFA is one of the identifiers that is controlled by the new tracking permission.

To request permission to track the user, call the AppTrackingTransparency framework. If users select “Ask App Not to Track,” the IDFA API will return all zeros."

So, which button would you click?

My guess is that relatively few people will click the “Allow Tracking” button.

This will mean that Facebook and other apps will not be able to observe your IDFA, and will not be able to use this extremely convenient way to gather and share information about you.

Small Businesses

What does this mean for small businesses?

Let me offer another caveat here: I am not an expert on small businesses, their use of targeted advertising, or how effective that targeted advertising actually is for them.

However, I’m going to observe that small businesses, by and large, are hurt by online advertising in general, and are unlikely to have the kinds of data science teams that are best positioned to sift through and profit from reams of consumer data.

Some (like Facebook?) might respond that their interfaces are designed to make high-quality data analytics available even to businesses that don’t have giant data science teams. I don’t think this argument holds much water; whenever Facebook provides better information on its users, the companies with the strongest data-science teams will be the ones best positioned to capitalize on it.

Facebook makes the specific claim that “Facebook data shows that the average small business owner stands to see a cut of over 60% in their sales for every dollar they spend.” This claim is (I claim) fairly preposterous; I’m guessing that they’re measuring the relative effectiveness of targeted and non-targeted advertising. What this ignores is the fact that people are still going to buy things, so that if targeted advertising goes away, more dollars will go toward non-targeted advertising.

Of course, they might be right, if the stuff that people are buying because of targeted advertising is worthless junk that they didn’t need. In that case, maybe targeted advertising really is important, because it allows advertisers to sell us worthless junk. I’m not sure that’s a good argument for allowing targeted advertising.

The Free Internet

Next, Facebook makes the argument that this move is in opposition to the “free” internet. Specifically, they argue that “Many [sites] will have to start charging you subscription fees or adding more in-app purchases, making the internet much more expensive and reducing high-quality free content.”

This paragraph makes it clear that Facebook’s definition of “free” here is “free as in beer”, not “free as in speech”. That is, Apple’s change will not reduce the liberty of the internet. It will simply make it harder for sites to pay for their content-generation using targeted advertising.

Again, I think this argument doesn’t hold water, for the same reason given in the case of small businesses; unless the items being advertised are completely useless, shifting from targeted to untargeted advertising will simply level the advertising playing field, and make it harder for large companies to leverage their data science teams to separate you from your money.

However, a reduction in targeted advertising will make one huge difference: it will mean that small businesses will be more likely to work directly with the sites that their prospective customers actually use, rather than giving a huge slice of their money to middlemen like Facebook that mediate and control access to the targeted audiences. If I think that my customers read the Wall Street Journal, I will contact the Wall Street Journal and place an ad. This may sound like business as usual, but it’s absolutely terrifying to advertising middlemen whose current value lies in holding all of the information about customers, and in placing advertisements directly in front of those who are likely to click on them.

In Summary…

To summarize: tracking is still possible after this move; it’s just that users must opt into it, rather than having it on by default.

And here’s the thing. If you’re as terrified as this by the idea that users might have to explicitly consent to your practices… maybe you’re not actually the good guy.

Or, as That Mitchell and Webb Look put it: “Hans … are we the baddies?”

Further reading:

Mozilla campaign to thank Apple

Apple Developer Conference Video (with transcript)

Forbes article on the proposed change

EFF opinion on advertiser IDs (both Google’s & Apple’s)

facebook full-page ad

my root password? oh sure, here it is.

:: Mac, Security, Programming Languages

By: John Clements

So, I just bought a copy of Screenflow, an apparently reputable piece of screencasting software for the Mac. They sent me a license code. Now it’s time to enter it. Here’s the window:

Hmm… says here all I need to do is… enter my admin password, and then the license code.

Wait… what?

Why in heaven’s name would Screenflow need to know my admin password here?

My guess is that it’s because it wants to put something in the keychain for me. That’s not a very comforting thought; it also means that it could delete things from my keychain, copy the whole thing, etc. etc.

This is a totally unacceptable piece of UI. I think the fault probably lies with both Apple and Telestream (the maker of Screenflow). Really, though, I just paid $100, and now I have to decide whether to try to get my money back, or just take the chance that Telestream isn’t evil.

Molis Hai: generating Passwords using Charles Dickens

:: Racket, Security

By: John Clements

TL;DR: Molis Hai

Randomly generated passwords:

  • dMbcGp=A(
  • 9eMRV7N%[
  • R]eJxx68v
  • GVUN#ek5z
  • ms8AG09-h
  • sVh2TT4wx
  • Y]sa7-b(f
  • BOrnNGLqk

More randomly generated passwords:

  • wargestood hury on,
  • wealenerity," stp
  • twould, aftilled himenu
  • Whaideve awasaga
  • andir her hing ples. F
  • spe it humphadeas a
  • to and ling, ace upooke,
  • Mr. Syd, why.’ tred. "D

Yet more randomly generated passwords:

  • brothe aponder and," reasun
  • ther atternal telle is be
  • his me, he foundred, id
  • allant our faces of rai
  • time! What it of vail
  • sourned," reate." Manybody.
  • they would reck," read-doom
  • raise thack ther meant,

Which of these look easiest to remember?

All three of these sets of passwords are randomly generated from a set of 2^56 possibilities; they’re all equivalently secure. The second and third sets are generated using Markov models built from the text of Charles Dickens’ A Tale of Two Cities, where transitions are made using Huffman trees.

The secret sauce here is that since traversing a Huffman tree to a common leaf requires fewer bits than traversing that same tree to reach a deep leaf, we can drive the generating model using a pool of bits, and use varying numbers of bits depending on the likelihood of the taken transition.

This means that there’s a 1-to-1 mapping between the sequences of bits and the corresponding English-like textual fragments, thus guaranteeing the security of the passwords (or, more precisely, reducing it to the problem of generating a cryptographically secure sequence of bits, which many smart people have thought hard about already).

Another reasonable way to describe this process is that we’re just “decompressing” randomly generated crypto bits using a model trained on Dickens.
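
To make that concrete, here’s a minimal Racket sketch of the decoding loop. This isn’t the actual Molis Hai code (the struct names, the tree-table representation, and the bit pool as a list are all my own assumptions), but it shows the central trick: each step consumes only as many bits as the Huffman path to the chosen character requires.

  #lang racket

  ;; A Huffman tree is either a leaf holding a character, or an interior
  ;; node with a zero-branch and a one-branch.
  (struct leaf (char) #:transparent)
  (struct node (zero one) #:transparent)

  ;; decode-one : tree (listof bit) -> (values char (listof bit))
  ;; Walk the tree, consuming one bit per interior node, until reaching a
  ;; leaf. Common characters sit near the root, so they cost fewer bits.
  (define (decode-one tree bits)
    (cond [(leaf? tree) (values (leaf-char tree) bits)]
          [(empty? bits) (error 'decode-one "ran out of bits mid-traversal")]
          [else (decode-one (if (zero? (first bits)) (node-zero tree) (node-one tree))
                            (rest bits))]))

  ;; generate : (hash string tree) string (listof bit) -> string
  ;; Drive the Markov model: the current n-character state selects a
  ;; Huffman tree, the bit pool selects a leaf of that tree, and the state
  ;; is updated with the chosen character.
  (define (generate trees state bits)
    (if (empty? bits)
        ""
        (let-values ([(ch rest-bits) (decode-one (hash-ref trees state) bits)])
          (define new-state (string-append (substring state 1) (string ch)))
          (string-append (string ch) (generate trees new-state rest-bits)))))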

The only difference between the second and third pools is the order of the model: the second one uses a 2nd-order Markov model, meaning that the choice of each letter is driven by the prior two letters, while the third one uses a 3rd-order model, resulting in more Dickensian text, but also in longer text.
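
Building the model itself is mostly just counting: for each n-character context appearing in the corpus, tally how often each character follows it, and then build a Huffman tree from each context’s tally (the tree-building step isn’t shown here). Here’s a rough Racket sketch of the counting step, with names of my own choosing rather than the ones in the actual implementation:

  ;; count-transitions : string natural -> (hash string (hash char natural))
  ;; For an order-n model, map each n-character context occurring in the
  ;; corpus to a table recording how often each character follows it.
  (define (count-transitions corpus n)
    (define counts (make-hash))
    (for ([i (in-range (- (string-length corpus) n))])
      (define context (substring corpus i (+ i n)))
      (define next-char (string-ref corpus (+ i n)))
      (define table (hash-ref! counts context make-hash))
      (hash-update! table next-char add1 0))
    counts)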

Naturally, you can push this further. When you get to a 5th order model, you get passwords like this:

  • not bitter their eyes, armed; I am natural
  • me. Is that. At fire, and, and—in separable;
  • reason off. The nailed abound tumbril o
  • and many more." “See, that,” return-
  • falls, any papers over these listen
  • do you, yes." "I beg to takes merc
  • paper movement off before," said, Charles," rejoin
  • that. She—had the season flung found." He o

Much more Dickensian, much longer. Same security.

You can try it out yourself; Molis Hai contains a small JS implementation of this, and a canned set of 2nd-order trees.

Please note that there’s nothing secret about the model; we’re assuming that an attacker already knows exactly how you’re generating your passwords. The only thing he or she is missing is the 56 bits you used to generate your password.
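
For what it’s worth, producing those 56 bits is the easy part. Here’s a sketch of one way to do it in Racket (again, an assumption on my part, not necessarily how Molis Hai sources its randomness), using crypto-random-bytes from racket/random:

  ;; 56 cryptographically secure bits: seven bytes from the OS entropy
  ;; source, expanded into the list of bits that drives the generator.
  (require racket/random)

  (define (random-bit-pool)
    (for*/list ([b (in-bytes (crypto-random-bytes 7))]
                [i (in-range 8)])
      (bitwise-and (arithmetic-shift b (- i)) 1)))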

For a more carefully written paper that explains this a bit more slowly, see the preprint at ArXiv.

Naturally, you can use any corpus you like. I tried generating text using a big slab of my own e-mails, and aside from a serious tendency to follow the letter “J” with the letters “o”, “h”, and “n”, I didn’t notice a huge difference, at least not in the 2nd-order models. Well, actually, here’s an example:

  • 0.91, Also: Zahid We rigor
  • argustorigoring tent r
  • Myrics args foling") (
  • can’s fortalk at html-unds
  • having avaScript" 0.88489232B
  • John? I doe.cal fluore let a
  • botheird, creally, there thic
  • to ind [(solutell wil

It’s probably true that Charles Dickens wasn’t quite so likely to type “avascript” as I am. Or “html”.

To read the Racket code I used to generate the models, see GitHub.

And for Heaven’s sake, let me know about related work that I missed!