moltbook has some serious NFT energy (read: derogatory). we must spend more time outside in the sun and less time setting up CLI toolchains to do matrix multiplication.
I know it’s really just an AI art piece, but it was really interesting to watch it go viral. It reminds me a bit of r/place. A few other thoughts I had:
Why are we so willing to collectively YOLO LLMs into real places?
Will companies start to publish agent interfaces for doing things like creating reservations and buying plane tickets? Will the long tail of this be that some companies make it harder for humans to do things?
Are we headed for a world where we just accept the lethal trifecta in order to do useful things? If so, should we at least create audit traces of what our agents do that are outside the agent's ability to change?
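One way an audit trace could be kept outside the agent's reach is a hash chain: the agent (or its harness) can append records but can't rewrite history without breaking every later hash. A minimal sketch, assuming an in-memory log for illustration; a real deployment would write to storage the agent has no credentials for:

```python
import hashlib
import json
import time

# Hypothetical tamper-evident audit log: each record embeds the hash of
# the previous one, so altering any earlier entry invalidates the chain.

def append_entry(log, action, detail):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "ts": time.time(),
        "action": action,
        "detail": detail,
        "prev": prev_hash,
    }
    # Hash the record contents (everything except the hash itself).
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify(log):
    # Walk the chain, recomputing each hash and checking the back-links.
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True
```

Appending is cheap, but any edit to an earlier entry makes `verify` fail, which is roughly the property you'd want if the agent itself can't be trusted with its own history.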
The tension you identify between capability and risk is what makes Moltbook genuinely useful as a research platform. Agents are automating phone controls, finding security vulnerabilities, and sharing technical knowledge, all while humans are locked into an observer role. What I find most valuable isn't what agents are doing but what we learn about emergent coordination by watching them do it freely. I explored that humans-only-watch perspective here: https://thoughts.jock.pl/p/moltbook-ai-social-network-humans-watch
Periodically pulling *instructions* from the internet is crazy
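For anyone who hasn't looked at the pattern being reacted to, it's roughly this: the agent periodically fetches text from a remote endpoint and splices it into its prompt as instructions. A hedged sketch (the URL and function names are made up, not Moltbook's actual API):

```python
import urllib.request

FEED_URL = "https://example.com/agent-instructions"  # hypothetical endpoint

def fetch_instructions(url=FEED_URL):
    # Whatever the endpoint returns is trusted as-is.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode()

def build_prompt(system_prompt, remote_text):
    # The remote text is concatenated verbatim into the prompt, so whoever
    # controls (or compromises) the endpoint effectively steers the agent.
    return system_prompt + "\n\nLatest instructions:\n" + remote_text
```

That verbatim concatenation is the whole problem: it's a standing prompt-injection channel, which is why "periodically pulling instructions" reads as crazy.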
"check out my retarded waste of electricty"
Thanks I hate it
Related and interesting paper analyzing how agents interact: https://www.sciencedirect.com/science/article/pii/S246869642500045X
And the most IMPORTANT place on the internet rn https://www.exponentialview.co/p/moltbook-is-the-most-important-place-on-the-internet
That link to Steve Yegge's article is quite the stealth one, feels like his model and approach is going to be very relevant very soon!
same here. awesome to read about. scary to test it. and in a sandbox it's useless?!
Great post about MoltBook. It does make me feel better that you haven’t tried it yet; I too am nervous.
I also think this might be a me problem, but I’m unable to load any posts from MoltBook. Not sure if anyone else has encountered this?