136 private links
All of #Microsoft's marketing promises of sovereignty to Europe, dismantled with just a few statements in the French Senate. Monsieur Carniaux works for Microsoft France.
On NSA/FISA he said he knew nothing (which may well be true, since the intelligence services have direct channels to the American corporations). Otherwise he testified under oath that, in the case of the CLOUD Act, the data must be handed over to the American authorities (after legal proceedings and so on, of course).
https://www.senat.fr/compte-rendu-commissions/20250609/ce_commande_publique.html
- They employed folks like Nyquist and Shannon, who laid the foundations of modern information theory and electronic engineering while they were employees at Bell.
- They discovered the first radio emissions from the center of our galaxy (later identified with its central black hole) in the 1930s while analyzing static noise on shortwave transmissions.
- They developed in 1937 the first speech codec and the first speech synthesizer.
- They developed the photovoltaic cell in the 1940s, and the first solar cell in the 1950s.
- They built the first transistor in 1947.
- They built the first large-scale relay computers (from the Model I in 1939 to the Model VI in 1949).
- They employed Karnaugh in the 1950s, who worked on the Karnaugh maps that we still study in engineering while he was an employee at Bell.
- They contributed in 1956 (together with AT&T and the British and Canadian telephone companies) to the first transatlantic communications cable.
- They developed the first electronic music program in 1957.
- They employed Kernighan, Thompson and Ritchie, who created UNIX and the C programming language while they were Bell employees.
Many developers are terrified of losing their jobs for this very reason: AIs sometimes program better than they do. And, in my opinion, they are right to be afraid. But I'm more afraid of a world (and not just in IT) where code will depend exclusively on the companies that sell us AIs.
Today, writing code is essentially free, doable even on a beat-up laptop. But tomorrow? Will we be completely dependent on AIs (even) for this?
I will just have to concede that maybe I’m wrong. I don’t have the skill, or the knowledge, or the energy, to demonstrate with any level of rigor that LLMs are generally, in fact, hot garbage. Intellectually, I will have to acknowledge that maybe the boosters are right. Maybe it’ll be OK.
Maybe the carbon emissions aren’t so bad. Maybe everybody is keeping them secret in ways that they don’t for other types of datacenter for perfectly legitimate reasons. Maybe the tools really can write novel and correct code, and with a little more tweaking, it won’t be so difficult to get them to do it. Maybe by the time they become a mandatory condition of access to developer tools, they won’t be miserable.
Sure, I even sincerely agree, intellectual property really has been a pretty bad idea from the beginning. Maybe it’s OK that we’ve made an exception to those rules. The rules were stupid anyway, so what does it matter if we let a few billionaires break them? Really, everybody should be able to break them (although of course, regular people can’t, because we can’t afford the lawyers to fight off the MPAA and RIAA, but that’s a problem with the legal system, not tech).
I come not to praise “AI skepticism”, but to bury it.
Maybe it really is all going to be fine. Perhaps I am simply catastrophizing; I have been known to do that from time to time. I can even sort of believe it, in my head. Still, even after writing all this out, I can’t quite manage to believe it in the pit of my stomach.
A funny thing happened on the way to the future. The mainframe outlasted its replacements.
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was lowest in the LLM group and highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
- Governments increasingly turn to DNS-level blocking interventions because they’re easy to implement.
- The study shows that a drop in the usage of public DNS services is making DNS blocking efforts easier.
- Browser-based security, user-level content filtering, and greater transparency are alternative ways to implement censorship without erasing connections.
Go to Settings, select EQ Modes, then tap the EQ Modes title more than five times to enable vibration mode.
OIDA represents a four-step process designed to guide analysts through the packet analysis journey:
Observe: Capture the right data at the right time and place.
Identify: Pinpoint the relevant information within the captured data.
Dissect: Break down the identified data for detailed examination.
Analyze: Draw meaningful conclusions from the dissected information.
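The steps above can be sketched in code. As a minimal illustration of the "Dissect" step, here is a Python snippet that breaks a pcap global header into its fields; the function name and the hand-built header bytes are mine, not from the OIDA article.

```python
import struct

# pcap global header: magic, version major/minor, tz offset,
# timestamp accuracy, snapshot length, link-layer type (24 bytes)
PCAP_GLOBAL_HDR = struct.Struct("<IHHiIII")

def dissect_pcap_header(data: bytes) -> dict:
    magic, vmaj, vmin, _tz, _sigfigs, snaplen, linktype = \
        PCAP_GLOBAL_HDR.unpack(data[:PCAP_GLOBAL_HDR.size])
    if magic != 0xA1B2C3D4:
        raise ValueError("not a little-endian microsecond pcap file")
    return {"version": f"{vmaj}.{vmin}", "snaplen": snaplen, "linktype": linktype}

# Hand-built example header: pcap 2.4, snaplen 65535, linktype 1 (Ethernet)
sample = PCAP_GLOBAL_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(dissect_pcap_header(sample))
# → {'version': '2.4', 'snaplen': 65535, 'linktype': 1}
```

Observing (capturing) and analyzing still happen in a tool like Wireshark or tcpdump; the point here is only what "dissecting" means mechanically.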
The very short version: It has now become clear that European governments can no longer rely on American clouds, and that we lack good and comprehensive alternatives. Market forces have failed to deliver a truly European cloud, and businesses won’t naturally buy as yet unproven cloud services, even when adorned with a beautiful European 🇪🇺 flag, so for now nothing will happen.
So, CLOUDFLARE ANALYZED PASSWORDS PEOPLE ARE USING to LOG IN to sites THEY PROTECT and DISCOVERED lots of re-use.
A huge blocklist of manually curated sites (1000+) that contain AI generated content, for the purposes of cleaning image search engines (Google Search, DuckDuckGo, and Bing) with uBlock Origin or uBlacklist.
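For context, a uBlacklist subscription is, as I understand the format, just a text file of match patterns, one per line; the domains below are placeholders of mine, not entries from the actual blocklist:

```
*://*.example-ai-gallery.com/*
*://ai-art.example.net/*
```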
In this post, we first discuss the background of what microcode is, why microcode patches exist, why the integrity of microcode is important for security, and how AMD attempts to prevent tampering with microcode. Next, we focus on the microcode patch signature validation process and explain the vulnerability in detail (the use of CMAC as a hash function). Finally, we discuss how to use some of the tools we've released today, which can help researchers reproduce and expand on our work.
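To see why using a keyed MAC as a hash is fatal, here is a toy sketch. Python's stdlib has no CMAC, so HMAC-SHA256 stands in, and the key and function names are mine; the argument carries over: once the MAC key is known (or derivable, as in the AMD case), anyone can mint a valid tag for arbitrary data, so the tag proves nothing about who produced it.

```python
import hmac
import hashlib

# Illustrative stand-in key -- NOT AMD's actual key. The point is that
# the same key used by the verifier is available to the attacker.
KNOWN_KEY = b"key-shipped-in-every-part"

def tag(data: bytes) -> bytes:
    """Keyed MAC misused as a 'hash' of a microcode patch."""
    return hmac.new(KNOWN_KEY, data, hashlib.sha256).digest()

def verifier_accepts(patch: bytes, t: bytes) -> bool:
    # What the loader effectively checks when the "hash" is a keyed MAC
    return hmac.compare_digest(tag(patch), t)

legit = b"official microcode patch"
evil = b"attacker-controlled patch"

# The attacker computes a fresh tag with the same known key...
forged_tag = tag(evil)
# ...and the verifier cannot tell the forgery from the real thing.
print(verifier_accepts(legit, tag(legit)), verifier_accepts(evil, forged_tag))
# → True True
```

A true cryptographic hash (or a proper asymmetric signature) has no secret the verifier must hold, which is exactly the property the patch-loading scheme needed.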
We now have the bizarre situation that anyone with any sense can see that America is no longer a reliable partner, and that the entire large-scale US business world bows to Trump’s dictatorial will, but we STILL are doing everything we can to transfer entire governments and most of our own businesses to their clouds.
The Security Group at the University of Cambridge has put out a research paper, "Emergent Harms", on how over half of all cybercrime court cases in the UK are prosecutions of UK police officers for abusing their access to systems.
Technologist Bert Hubert tells The Reg Microsoft Outlook is a huge source of geopolitical risk
How Breeze Liu, an advocate for digital abuse victims, got Microsoft to scrub 142 nonconsensual explicit images of her hosted on Azure after months of struggle.
Based on my research, the earliest computer to use the term "main frame" was the IBM 701 computer (1952), which consisted of boxes called "frames." The 701 system consisted of two power frames, a power distribution frame, an electrostatic storage frame, a drum frame, tape frames, and most importantly a main frame. The IBM 701's main frame is shown in the documentation below.
This paper presents an indirect methodology to assess IRQ overhead by constructing preliminary approaches to reduce the impact of IRQs. While these approaches are not suitable for general deployment, their corresponding performance observations indirectly confirm the conjecture. Based on these findings, a small modification of a vanilla Linux system is devised that significantly improves the efficiency and performance of traditional kernel-based networking, resulting in up to 45% increased throughput without compromising tail latency.
Open offices are hailed as hubs of collaboration and innovation. They are supposed to bring us together. Tear down the walls, the enthusiasts said, and watch collaboration flourish.
But what did we actually get?
A noise-filled stress-and-distraction factory where productivity plummets, introverts suffer, and sensory-sensitive neurodivergent talent is excluded.