

AI induced schizophrenia is so sad


It’s all fictional, it can be whatever you want


Fine with me


Ah yes, the technology that hallucinated Trench Crusade lore when I asked it some questions will now be feeding BS to highly trained, violent operatives obedient to the orange thing. Great


Cyberpunk. I’m putting my consciousness in the net


I live with the premise that I will die, but I also live with the premise that I will be old for a significant portion of my life.
Personally, I'm trying to maximize longevity, but I'll still do risky things if the consequences are ones I feel I can deal with
This is 100% ai


Lol no it's not, it's going to be fucking Elysium. And that's only if they actually figure out how to do AGI, not LLM BS
Just pour it over like olive oil on charcuterie and you’ll be good. Trust me
Although this is stupid, you wouldn't believe the number of people I work with who are "highly educated" but have the worst work processes / ideas / work ethic.
The past two years of my life working in corporate have dramatically changed my overall view of average human intelligence.


I use it for tedious transformations or needle-in-a-haystack problems.
They're better at searching for themes or concepts than they are at actually doing any "thinking tasks". My rule is that if it requires a lot of critical thinking, then the LLM can't do it.
It's definitely not all they say it is. I think LLMs will fundamentally always have these problems.
I've actually had a much better time using it for inline completion as of late. It's much better when the scope of the problem it needs to "solve" (the code it needs to find and compose to complete your line) is in the Goldilocks zone. And if the answer it gives is bad, I just keep typing.
I really hate the way LLM vibe-coded slop is written and architected. To me it's clear these things have an extremely limited conception of what they're doing. I've compared it to essentially ripping out the human language center, giving it a keyboard, and asking it to program for you. It's just not really what it's good at.


YSK, there's a large number of older Kindles that can be jailbroken.
I just SSH'd pirated .mobi files onto mine
Bro, that's a PRIME sailing planet if I've ever seen one.
Earth's ocean shores are largely extremely boring, linear beaches, especially along the Atlantic.
This planet would be prime for small, cheap hobby coastal sailing
Is he threatening nuclear war???


How do you determine who is able to do the job


It’s all I see slop enthusiasts go off of


I’m terrified I’ll never be able to find somewhere to work where this isn’t the case. Deeply terrified.


On the one hand software freedom.
On the other, this has me thinking about how fascinating this problem is from an academic standpoint.
How can you ensure software can ONLY run on the machines you allow? Even if the user has ring 0 access?
Is it mathematically impossible to achieve?
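To make the ring-0 problem concrete, here's a minimal sketch (all names hypothetical) of the naive software-only approach: the vendor signs a token bound to a machine identifier, and the client refuses to run unless the token matches. The comment at the bottom is the whole point, though.

```python
import hmac
import hashlib

# Hypothetical vendor signing key. Note it must also be embedded in the
# client binary for verification, which is itself extractable.
VENDOR_KEY = b"vendor-secret"

def issue_token(machine_id: str) -> str:
    """Vendor side: bind a license token to one machine's identifier."""
    return hmac.new(VENDOR_KEY, machine_id.encode(), hashlib.sha256).hexdigest()

def check_token(machine_id: str, token: str) -> bool:
    """Client side: refuse to run unless the token matches this machine."""
    expected = hmac.new(VENDOR_KEY, machine_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

# The catch: a user with ring-0 access controls what machine_id the check
# sees, and can simply patch check_token to return True. Purely
# software-based enforcement can always be subverted by whoever controls
# the machine, which is why real schemes lean on hardware roots of trust
# (e.g. TPM-backed remote attestation) rather than math alone.
```

So the honest answer to "is it mathematically impossible?" seems to be: in pure software against a ring-0 attacker, yes; the field moves the trust anchor into hardware instead.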
Any question that you could get the answer to by reading a sentence or two from a book or article is generally one an LLM will be able to answer.
It’s good for surface level exploratory research.
The deeper you go the more fallacies and bias start to enter the mix. LLMs by nature are biased and will only present you with one or two solutions/options/opinions on why and what.
Knowing when you're hitting that point is difficult. I urge you to read documentation and human-written guides and articles once you find yourself surpassing what the LLM can give you.
As a software developer and script writer myself, I can guarantee you that what an LLM writes is not "gold"; it's more like bronze at best.
I actually really like this, and it won’t slide down your wrist or hug it too tight either