Composable moderation/custom labeling and custom algorithmic feeds are two things that Mastodon doesn’t have that Bluesky does.
Isn’t the main problem that most people don’t use the E2E encrypted chat feature on Telegram, so most of what’s going on is not actually private, and Telegram does have the ability to moderate but refuses to (and also refuses to cooperate)?
Something like Signal gets around this by not having the technical ability to moderate (or any substantial data to hand over).
Before people can be persuaded to use them, we have to persuade or force the companies and sites to support them.
A multi-billion-dollar social media company sued an ad industry group that was trying to help companies set up some kind of brand safety standards to prevent a company’s ads from appearing next to objectionable content. The group reportedly had two full-time staff members. This isn’t some big win, it’s just bullying.
Basically, with passkeys you have a public/private key pair that is generated for each account/each site and stored somewhere on your end (on a hardware device, in a password manager, etc.). When setting it up with the site, you give your public key to the site so that it can recognize you in the future. When you want to prove that it’s you, the website sends you a unique challenge message and asks you to sign it (a unique message to prevent replay attacks). There’s some extra stuff in the spec regarding how the keys are stored and how the user is verified on the client side (such as requiring both access to the key and some kind of presence check or knowledge/biometric factor), but for the most part it’s like client certificates, only easier.
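If it helps to see it concretely, here’s a rough sketch of that challenge/response flow in Python (just an illustration using Ed25519 keys via the cryptography package, which is my choice here and not anything the spec requires; real passkeys/WebAuthn add origin binding, attestation, and the user-verification stuff on top):

```python
# Sketch of the core signature flow behind passkeys (not WebAuthn itself).
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the client generates a key pair for this account/site and
# sends only the public key to the site.
client_private_key = Ed25519PrivateKey.generate()
site_stored_public_key = client_private_key.public_key()

# Login: the site sends a fresh random challenge (so old signatures
# can't be replayed)...
challenge = os.urandom(32)

# ...the client signs it with the private key it never shares...
signature = client_private_key.sign(challenge)

# ...and the site checks the signature against the public key it stored.
try:
    site_stored_public_key.verify(signature, challenge)
    print("Signature valid: this is the registered user")
except InvalidSignature:
    print("Signature invalid")
```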
Don’t most DoH resolver settings have you enter the IP (for the actual lookup connection) along with the hostname of the DoH server (for cert validation for HTTPS)? Wouldn’t this avoid the first-lookup problem, because there would be a certificate mismatch if they tried to intercept it?
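Roughly what I mean, as a sketch (not how any particular client implements it; the IP and hostname below are just the usual Cloudflare example values):

```python
# Sketch: connect to the resolver by its configured IP (no DNS lookup needed),
# but validate the TLS certificate against the configured hostname.
import socket
import ssl

BOOTSTRAP_IP = "1.1.1.1"             # IP from the DoH settings
DOH_HOSTNAME = "cloudflare-dns.com"  # hostname from the DoH settings

context = ssl.create_default_context()  # cert chain + hostname checks enabled

with socket.create_connection((BOOTSTRAP_IP, 443), timeout=5) as sock:
    # server_hostname sets SNI and the name the certificate must match, so an
    # interceptor sitting at that IP without a valid cert for the hostname
    # fails here with ssl.SSLCertVerificationError.
    with context.wrap_socket(sock, server_hostname=DOH_HOSTNAME) as tls:
        print("Verified TLS connection to", tls.getpeercert()["subject"])
```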
With a breach of this size, I think we’re officially at the point where enough people’s data is out there that knowledge-based security questions should be considered unsafe. We need to come up with different authentication methods.
I’d imagine that making it a user choice gets around some of the regulatory hurdles in some way. I can see them making a popup in the future to not use third-party cookies anymore (or partition them per site like Firefox does), and then they can say that it’s not Google making these changes, it’s the user making that choice. If you’re right that there are few who would answer yes, then it gets them the same effective result for most users without being seen to force a change on their competitors in the ad industry.
What’s the UK CMA going to do, argue that users shouldn’t be given choices about how they are tracked or how their own browser operates?
The plan was only to kill off third-party cookies, not first-party, so being able to log into stuff (and stay logged in) was not going to be affected. Most other browsers have already blocked or limited third-party cookies, but most other browsers aren’t owned by a company that runs a dominant ad-tech business, so they can just make those changes without consulting anyone.
Also, it looks like there might be some kind of standard for federated login being worked on but I haven’t really investigated it: https://developer.mozilla.org/en-US/docs/Web/API/FedCM_API
They definitely knew it would impact their ad business but I think what did it was the competition authorities saying they couldn’t do it to their competitors either, even if they were willing to take the hit on their own services.
Impact on their business (bold added): https://support.google.com/admanager/answer/15189422
- Programmatic revenue impact without Privacy Sandbox: By comparing the control 2 arm to the control 1 arm, we observed that removing third-party cookies without enabling Privacy Sandbox led to -34% programmatic revenue for publishers on Google Ad Manager and -21% programmatic revenue for publishers on Google AdSense.
- Programmatic revenue impact with Privacy Sandbox: By comparing the treatment arm to control 1 arm, we observed that removing third-party cookies while enabling the Privacy Sandbox APIs led to -20% and -18% programmatic revenue for Google Ad Manager and Google AdSense publishers, respectively.
Looking at it most favorably: if they ever want to not be dependent on Google, they need revenue to replace what they get from Google, and, like it or not, much of the money online comes from advertising. If they can find a way to get that money without being totally invasive of privacy, that’s still better than their current position.
For scenario one, the law says they need to delete the data used for age verification right after they collect it (unless another law says they have to keep it), and you can totally trust every company to follow the law.
For scenario two, that’s where the age verification requirements of the law come in.
No, no, no, it’s super secure you see, they have this in the law too:
Information collected for the purpose of determining a covered user’s age under paragraph (a) of subdivision one of this section shall not be used for any purpose other than age determination and shall be deleted immediately after an attempt to determine a covered user’s age, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.
And they’ll totally never be hacked.
From the description of the bill (bold added):
https://legislation.nysenate.gov/pdf/bills/2023/S7694A
To limit access to addictive feeds, this act will require social media companies to use commercially reasonable methods to determine user age. Regulations by the attorney general will provide guidance, but this flexible standard will be based on the totality of the circumstances, including the size, financial resources, and technical capabilities of a given social media company, and the costs and effectiveness of available age determination techniques for users of a given social media platform. For example, if a social media company is technically and financially capable of effectively determining the age of a user based on its existing data concerning that user, it may be commercially reasonable to present that as an age determination option to users. Although the legislature considered a statutory mandate for companies to respect automated browser or device signals whereby users can inform a covered operator that they are a covered minor, we determined that the attorney general would already have discretion to promulgate such a mandate through its rulemaking authority related to commercially reasonable and technologically feasible age determination methods. The legislature believes that such a mandate can be more effectively considered and tailored through that rulemaking process. Existing New York antidiscrimination laws and the attorney general’s regulations will require, regardless, that social media companies provide a range of age verification methods all New Yorkers can use, and will not use age assurance methods that rely solely on biometrics or require government identification that many New Yorkers do not possess.
In other words: sites will have to figure it out and make sure that it’s both effective and non-discriminatory, and the safe option would be for sites to treat everyone like children until proven otherwise.
Doesn’t necessarily need to be anyone with a lot of money, just a lot of people mass reporting things combined with automated systems.
It’s like an automated tipofmytongue but for everything you do on your computer.
I’m not sure I’m surprised at this point any more, just disappointed. All they have to do is just make a stable and secure platform to run apps on. They’re going to run out of foot to shoot themselves in sooner or later if they keep this kind of thing up. Too many unforced errors.
It should never have gotten to the external feedback stage because internal feedback should have been sufficient to kill the idea before it even got a name due to it being such a security and privacy risk. The fact that it didn’t is worrying from a management perspective.
To be fair to Microsoft, this was a local model too, and the data was encrypted (through BitLocker). I just feel like the only way you could possibly even try to secure it would be to lock the user out of the data with some kind of separate storage and processing, because anything the user can do can be done by malware running as that user. Even then, DRM and how it gets cracked has shown us that nothing like that is truly secure against motivated attackers. Since restricting a user’s access like that won’t happen and might not even be sufficient, it’s just way too risky.
It’s an improvement over the current systems. Incremental improvements to the state of things can be a good thing too.