So they’re already aware of the risks. AI companies are being run with the same big oil/big tobacco playbook lol. You can have all the fancy new technology, but if the money is still coming from the same group of rich inbred douchebags it doesn’t matter, because it will turn to shit.
AI companies are definitely aware of the real risks. It’s the imaginary ones (“what happens if AI becomes sentient and takes over the world?”) that I imagine they’ll put that money towards.
Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child who’s expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool. All this talk about guardrails seems almost as fanciful.
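To the comment’s point, the bare-bones version of such an interception layer really is small. Here’s a minimal sketch of a keyword/regex cutoff that short-circuits the model pipeline and returns a crisis resource instead (the phrase list and helpline text are illustrative only; a production system would need a vetted clinical lexicon, a trained classifier, conversation context, and clinical review — a regex list alone is nowhere near sufficient):

```python
import re

# Illustrative phrase list — a real deployment would use a vetted clinical
# lexicon plus a trained classifier, not a handful of regexes.
CRISIS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"\bkill myself\b",
        r"\bend my life\b",
        r"\bsuicid(e|al)\b",
        r"\bwant to die\b",
    )
]

HELPLINE_MESSAGE = (
    "It sounds like you might be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def intercept(user_message: str):
    """Return a crisis response if the message trips a pattern, else None.

    Returning None signals the normal model pipeline should handle it.
    """
    if any(p.search(user_message) for p in CRISIS_PATTERNS):
        return HELPLINE_MESSAGE
    return None
```

The point isn’t that this toy filter is adequate — it isn’t — but that even a first-pass cutoff switch is a trivially small amount of code, which makes its absence a choice rather than a technical limitation.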