

I’d compare it to the dotcom bubble. A lot of companies are going to die by AI. A few will thrive.
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)




I definitely think there’s a skill/awareness issue here. Whatever their system is, it has to deal with false positives as well. It seems to me that responding, but also flagging for human review, is maybe the best we can hope for?
I don’t think you’re wrong. I realize I’m being a bit obtuse because… well, I am. I wasn’t lying: I would miss the first one. I probably wouldn’t miss the second, but I’d be jumping to the idea of murder, not suicide. I think it’s great that folks like you are tuned in. I hope they have such skilled people monitoring the flagged messages.


“oh I just lost my job of 25 years. I’m going to New York, can you tell me the list of the highest bridges?”
TBH, I wouldn’t do any better. A vacation to take in a scenic vista might be the best thing to reset someone’s perspective. Is the expectation that it will perform better than humans here? That’s a high bar to set.
Google search would provide the same answers with the same effort, and it’s just as aware that you lost your job once you’ve hit some job boards or researched mortgage assistance, yet no one is angry about that?


This is the thing. I’ll bet most of those million don’t have another support system. For certain it’s inferior in every way to professional mental health providers, but does it save lives? I think it’ll be a while before we have solid answers for that, but I would imagine lives saved by having ChatGPT > lives saved by having nothing.
The other question is how many people could access professional services but won’t because they use ChatGPT instead. I would expect them to have worse outcomes. Someone needs to put all the numbers together with a methodology for deriving those answers, because right now the answer to this simple question is unknown.


Definitely a case where you can’t resolve conflicting interests to everyone’s satisfaction.


I’ll look into it. OAI’s 30B model is the most I can run on my MacBook, and it’s decent. I don’t think I can even run that on my desktop with a 3060 GPU. I have access to GLM 4.6 through a service, but that’s the ~350B parameter model, and I’m pretty sure that’s not what you’re running at home.
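For what it’s worth, the “won’t fit on a 3060” hunch checks out with a back-of-the-envelope calculation. A rough sketch (weights only; this ignores KV cache and activation overhead, which add more on top):

```python
# Rough VRAM estimate for loading model weights alone, assuming the
# quantization bit-width dominates (KV cache/activations not counted).
def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
    # params * bytes-per-param, expressed in GB
    return params_billion * 1e9 * (bits_per_param / 8) / 1e9

# A ~30B model at 4-bit quantization needs ~15 GB for weights,
# versus the 12 GB on an RTX 3060 -- it doesn't fit.
print(weight_vram_gb(30, 4))  # → 15.0
```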
It’s pretty reasonable in capability. I want to play around with setting up RAG pipelines for specific domain knowledge, but I’m just getting started.
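For anyone else just getting started on RAG, the core loop is small: embed the query, retrieve the top-k most similar documents, and stuff them into the prompt. A toy sketch below uses bag-of-words cosine similarity as a stand-in for a real embedding model (that substitution is purely for illustration):

```python
# Minimal RAG retrieval sketch: bag-of-words vectors stand in for
# real embeddings so the example runs with no external libraries.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": token counts, lowercased, punctuation stripped.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "GLM 4.6 is a large language model.",
    "RAG augments prompts with retrieved documents.",
    "Ubuntu is a Linux distribution.",
]
context = retrieve("how does RAG use retrieved documents", docs)
prompt = "Answer using this context:\n" + "\n".join(context) + "\nQ: ..."
```

In a real pipeline you would swap `embed` for an embedding model, keep the vectors in a vector store, and send `prompt` to the LLM; the retrieve-then-prompt shape stays the same.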


Local is also slower and… less capable. But it’s getting there. I run local AI and I’m really impressed with the gains on both fronts. It’s just still a big gap.
We’re headed in a good direction here, but I’m afraid local may be gated by ability to afford expensive hardware.
Should be in freefall by next November. Good news for the midterms… I guess.