This was quite a wild ride. While we should expect everything involving AI to be vulnerable by default, it still surprised us how many things we could find in such a short amount of time. While we were working on this research, a lot of other people were looking into attacking MCP as well, which scared us: had they found what we found?
Hopefully, these frameworks will get some sane defaults that make it hard for developers to accidentally expose servers, and vulnerabilities reachable from the browser will be mitigated quickly as well. Until then, we hope you enjoyed this post, and we would love to hear your thoughts and ideas for taking this stuff even further.
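The "sane default" these researchers ask for largely comes down to what a framework binds to out of the box. Here is a minimal sketch in Python (standard library only, not any actual MCP framework; the handler name and port are made up for illustration) of a server that stays on the loopback interface unless the developer explicitly opts out:

```python
# Illustrative only: a tool server that defaults to the loopback
# interface, so it is unreachable from other machines unless the
# developer deliberately passes a wider bind address.
from http.server import HTTPServer, BaseHTTPRequestHandler

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

def serve(host: str = "127.0.0.1", port: int = 8080) -> None:
    # "127.0.0.1" keeps the server off the network; a framework whose
    # default is "0.0.0.0" exposes it to every host that can reach
    # this machine.
    HTTPServer((host, port), PingHandler).serve_forever()

if __name__ == "__main__":
    serve()
```

Note that this only addresses accidental network exposure; the browser-based attacks the authors mention work against loopback too and need their own mitigations, such as validating the Origin header.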
Last week I was talking to a friend who runs a small construction company. He was telling me about how all the big contractors in town are pushing "smart" building systems that require constant cloud connectivity and subscription services. Meanwhile, he's still using techniques that have worked for decades, tools he can fix himself, materials he understands completely.
"They keep telling me I'm behind the times" he said. "But when their fancy systems go down, who do they call?"
Maybe being "behind the times" isn't always a bad thing. Maybe sometimes it means you still own your tools instead of renting them.
The next time you catch yourself getting defensive about something - really defensive, like you're personally offended that someone would dare question it - maybe pause for a second. Ask yourself: am I defending this because it's actually good for me, or because I'm scared to imagine alternatives?
Because the first step toward freedom is always the same: admitting you might be wearing chains.
Many developers are terrified of losing their jobs for this very reason: AIs sometimes program better than they do. And, in my opinion, they are right to be afraid. But I'm more afraid of a world (and not just in IT) where code depends exclusively on the companies that sell us AIs.
Today, writing code is essentially free, doable even on a beat-up laptop. But tomorrow? Will we be completely dependent on AIs (even) for this?
I will just have to concede that maybe I’m wrong. I don’t have the skill, or the knowledge, or the energy, to demonstrate with any level of rigor that LLMs are generally, in fact, hot garbage. Intellectually, I will have to acknowledge that maybe the boosters are right. Maybe it’ll be OK.
Maybe the carbon emissions aren’t so bad. Maybe everybody is keeping them secret in ways that they don’t for other types of datacenter for perfectly legitimate reasons. Maybe the tools really can write novel and correct code, and with a little more tweaking, it won’t be so difficult to get them to do it. Maybe by the time they become a mandatory condition of access to developer tools, they won’t be miserable.
Sure, I even sincerely agree, intellectual property really has been a pretty bad idea from the beginning. Maybe it’s OK that we’ve made an exception to those rules. The rules were stupid anyway, so what does it matter if we let a few billionaires break them? Really, everybody should be able to break them (although of course, regular people can’t, because we can’t afford the lawyers to fight off the MPAA and RIAA, but that’s a problem with the legal system, not tech).
I come not to praise “AI skepticism”, but to bury it.
Maybe it really is all going to be fine. Perhaps I am simply catastrophizing; I have been known to do that from time to time. I can even sort of believe it, in my head. Still, even after writing all this out, I can’t quite manage to believe it in the pit of my stomach.
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only condition (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, analyzed essays using NLP, and scored essays with help from human teachers and an AI judge. Across groups, named entities, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was lowest in the LLM group and highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs: over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.
I don't doubt that the people from the TU, the University of Vienna, the Austrian Academy of Sciences, etc. who are involved here are doing something sensible.
But all the press releases about the "AI Factory" read as if ChatGPT had been told: "Please, something with an EXTRA helping of meaningless bullshit bingo!"
A huge blocklist of manually curated sites (1,000+) that contain AI-generated content, for the purpose of cleaning image search engines (Google Search, DuckDuckGo, and Bing) with uBlock Origin or uBlacklist.
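For context, a uBlacklist subscription of this kind is just a plain-text list of match patterns, one site per line. A hypothetical excerpt (these domains are placeholders, not entries from the actual list) would look like:

```
*://*.ai-image-farm.example/*
*://slop-gallery.example/*
```

Subscribing to such a list in uBlacklist then hides matching results from the supported search engines.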
"The selling point of generative A.I. is that these programs generate vastly more than you put into them, and that is precisely what prevents them from being effective tools for artists.
[...]
Many novelists have had the experience of being approached by someone convinced that they have a great idea for a novel, which they are willing to share in exchange for a fifty-fifty split of the proceeds. Such a person inadvertently reveals that they think formulating sentences is a nuisance rather than a fundamental part of storytelling in prose. Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium. But the creators of traditional novels, paintings, and films are drawn to those art forms because they see the unique expressive potential that each medium affords. It is their eagerness to take full advantage of those potentialities that makes their work satisfying, whether as entertainment or as art.
[...]
The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world."
The Signal founder stole the show with an opening chat laying out a case for reclaiming the "magic" of software development that has been lost over the past 20 years. That loss, he argued, came from stuffing developers into "black box abstraction layers" that strip them of the freedom needed to be innovative.
"Anybody who is managing an engineering organization will have some kind of management philosophy that is in some way downstream of, derivative of, in the zone of, or somehow related to agile," Marlinspike said.
With so much automation available, it’s easier than ever for identity thieves to flood the employment market with their own versions of ghost jobs, in order to gather practically all the personal information a victim could ever provide.
Has anybody out there read or written anything substantial about the effect of AI taking over and basically destroying conventional hiring pipelines from both sides, to the point it feels functionally impossible for many hiring managers to hire people they don't already somehow know?
This seems important.
But Stephenson is far more pessimistic about today’s AI than he was about the Primer. “A chatbot is not an oracle,” he told me over Zoom last Friday. “It’s a statistics engine that creates sentences that sound accurate.” I spoke with Stephenson about his uncannily prescient book and the generative-AI revolution that has seemingly begun.
We had four lawyers, three privacy experts, and two campaigners look at Microsoft's new Service Agreement, and none of our experts could tell if Microsoft plans on using your personal data – including audio, video, chat, and attachments from 130 products, including Office, Skype, Teams, and Xbox – to train its AI models.
If nine experts in privacy can't understand what Microsoft does with your data, what chance does the average person have? That's why we're asking Microsoft to say whether it plans to use our personal data to train its AI.
It is important to remember that Yudkowsky’s ideas are dumb and wrong, he has zero technological experience, and he has never built a single thing, ever. He’s an ideas guy, and his ideas are bad. OpenAI’s future is absolutely going to be wild.
There are many things to loathe Sam Altman for — but not being enough of a cultist probably isn’t one of them.
We think more comedy gold will be falling out over the next week.
If a single role is as expensive as thousands of workers, it is surely the prime candidate for robot-induced redundancy.
I interpret this as a positive sign that common sense is returning to the UK.
Two things about “artificial intelligence.” It’s not artificial - it’s built on as much human activity as can be shoved into a database. And it’s not intelligent - it is very fast manipulation of spreadsheets.
All tech news, and especially social media posts, are about hyperbole. Everything has to be the fastest, the biggest, the highest-growth, and the most disruptive. And if you don’t know about it or aren’t part of it, you are falling behind. And if that happens, you will be irrelevant as a voice or outdated for the market. I’m sick of it, and I am done with it.
Size isn’t everything. In fact, fast growth and inflated salaries are both red flags.
As Bender says, we've made "machines that can mindlessly generate text, but we haven’t learned how to stop imagining the mind behind it." One potential tonic against this fallacy is to follow an Italian MP's suggestion and replace "AI" with "SALAMI" ("Systematic Approaches to Learning Algorithms and Machine Inferences"). It's a lot easier to keep a clear head when someone asks you, "Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?"