

Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan.
https://d.sb/
Mastodon: @dan@d.sb


They’re probably trying to write it in a way that non-Rust-developers can understand.


Companies that build large LLMs have already said that this is becoming a problem. They’re running out of high-quality human-written content to train their models.
Google paid Reddit for access to their data to train their models, which is probably why their AI can be a bit dumb at times (and of course, the users who actually contributed the content don’t see any of that money).


It sounds like you can just log out and back in to fix it? For a local system, the article says it only occurs for “First time user logon after a cumulative update was applied.”
The top comments don’t look too bad now… Maybe they’re ranked differently or something
TIL there’s a name for this
I’m a fan of BunnyCDN - somehow they’re one of the fastest while also being one of the cheapest, and they’re based in Europe (Slovenia).
KeyCDN is good too, and they’re also Europe-based (Switzerland), but they have a higher minimum monthly spend of $4 instead of $1 at Bunny.
Fastly have a free tier with 100GB per month, but bandwidth pricing is noticeably higher than Bunny and KeyCDN once you exceed that.
https://www.cdnperf.com/ is useful for comparing performance. They don’t list every CDN though.
Some CDN providers are focused only on large enterprise customers, and it shows in their pricing.
Companies like OVH have good DDoS protection too.
“there really isn’t much in the way of an alternative”
Bunny.net covers some of the use cases, like DNS and CDN. I think they just rolled out a WAF too.
There are also the “traditional” providers like AWS, Akamai, etc., and CDN providers like KeyCDN and CDN77.
I guess one of the appeals of Cloudflare is that it’s one provider for everything, rather than having to use a few different providers?
This can happen regardless of language.
The actual issue is that they should be canarying changes. Push them to a small percentage of servers, and ensure nothing bad happens before pushing them more broadly. At my workplace, config changes are automatically tested on one server, then an entire rack, then an entire cluster, before fully rolling out. The rollout process watches the core logs for things like elevated HTTP 5xx errors.
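Something like this, as a rough sketch only: the stage names, the 1% 5xx threshold, and the deploy_to / error_rate_for / rollback helpers are all made up here, and a real pipeline would be driven by your deploy tooling and metrics instead.

```rust
// Minimal sketch of a staged (canary) rollout. All names and numbers are
// illustrative; a real system would talk to deploy tooling and a metrics store.

fn deploy_to(stage: &str) {
    println!("deploying config to {stage}...");
    // push the new config to just this slice of the fleet
}

fn error_rate_for(stage: &str) -> f64 {
    // in reality: query logs/metrics for the HTTP 5xx rate in this stage
    let _ = stage;
    0.001
}

fn rollback() {
    println!("rolling back config everywhere");
}

fn main() {
    let stages = ["one-server", "one-rack", "one-cluster", "global"];
    let max_5xx_rate = 0.01; // abort if more than 1% of requests fail

    for stage in stages {
        deploy_to(stage);
        // let the change bake before widening the blast radius
        std::thread::sleep(std::time::Duration::from_secs(60));
        if error_rate_for(stage) > max_5xx_rate {
            rollback();
            return;
        }
    }
    println!("rollout complete");
}
```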
Did you read the article? It wasn’t taken down by the number of bots, but by the number of columns:
In this specific instance, the Bot Management system has a limit on the number of machine learning features that can be used at runtime. Currently that limit is set to 200, well above our current use of ~60 features. Again, the limit exists because for performance reasons we preallocate memory for the features.
When the bad file with more than 200 features was propagated to our servers, this limit was hit — resulting in the system panicking.
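For anyone curious what that failure mode looks like, here’s a toy illustration (not Cloudflare’s actual code): a buffer preallocated for a fixed number of features, where going over the limit is treated as unrecoverable and the whole process panics.

```rust
// Illustrative only: a fixed-capacity buffer preallocated for performance,
// where exceeding the limit is treated as an unrecoverable error.

const MAX_FEATURES: usize = 200;

struct FeatureBuffer {
    values: Vec<f32>,
}

impl FeatureBuffer {
    fn new() -> Self {
        // preallocate once so the hot path never reallocates
        Self { values: Vec::with_capacity(MAX_FEATURES) }
    }

    fn load(&mut self, features: &[f32]) {
        // the bad config file effectively made features.len() exceed the limit
        assert!(
            features.len() <= MAX_FEATURES,
            "too many features: {} > {}",
            features.len(),
            MAX_FEATURES
        );
        self.values.clear();
        self.values.extend_from_slice(features);
    }
}

fn main() {
    let bad_config = vec![0.0_f32; 250]; // more than 200 entries, like the bad file
    let mut buf = FeatureBuffer::new();
    buf.load(&bad_config); // panics here, taking the process down
}
```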
They had some code to get a list of the database columns in the schema, but it accidentally wasn’t filtering by database name. This worked fine initially because the database user only had access to one DB. When the user was granted access to another DB, it started seeing way more columns than it expected.
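The bug was roughly this shape, assuming a ClickHouse-style system.columns catalog (the table and database names here are made up, not the real query): filtering only by table name works while the account can see a single database, but silently returns extra rows the moment a second database becomes visible.

```rust
// Made-up identifiers, not the actual Cloudflare query.

fn columns_query_buggy(table: &str) -> String {
    // relies on the user only having access to a single database
    format!("SELECT name, type FROM system.columns WHERE table = '{table}' ORDER BY name")
}

fn columns_query_fixed(database: &str, table: &str) -> String {
    // pin the query to the database you actually mean
    format!(
        "SELECT name, type FROM system.columns \
         WHERE database = '{database}' AND table = '{table}' ORDER BY name"
    )
}

fn main() {
    println!("{}", columns_query_buggy("http_requests_features"));
    println!("{}", columns_query_fixed("default", "http_requests_features"));
}
```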
When are people going to realise that routing a huge chunk of the internet through one private company is a bad idea? The entire point of the internet is that it’s a decentralized network of networks.
Just net send everyone a message saying that if they have issues, they need to reboot.
(is net send still a thing?)