It looks like the wave of campaigns against data centers is getting under big tech companies' skin, and Microsoft is the latest giant to promise to address frustrations on the ground in communities around its data centers. The company announced a five-point plan today that it calls "Community-First AI Infrastructure." That includes paying more […]
Topics:
science
tech
news
microsoft
environment
energy
technology
artificial-intelligence
ai
Many files exist in duplicate on the disk, taking up space needlessly. How can you locate them, and how can you delete them?
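One common approach, sketched here in Python so it works on Linux, macOS, and Windows alike, is to group files by a hash of their contents: any two files with the same hash are duplicates and all but one copy can be deleted. The function name and the chunk size are illustrative choices, not part of any particular tool.

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Walk `root` and group files by SHA-256 of their contents.

    Returns a dict mapping each hash that occurs more than once
    to the list of file paths sharing that content.
    """
    by_hash = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    # Read in 64 KiB chunks so large files don't fill memory.
                    for chunk in iter(lambda: f.read(1 << 16), b""):
                        h.update(chunk)
                by_hash[h.hexdigest()].append(path)
            except OSError:
                continue  # unreadable file (permissions, broken link): skip it
    # Keep only hashes seen more than once: those are the duplicate groups.
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

From each group you would typically keep one path and remove the rest with `os.remove`; reviewing the list before deleting anything is the safe habit. Real deduplication tools also pre-filter by file size before hashing, which this sketch omits for brevity.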
Topics:
linux
macos
windows
fichier en doublon
système
science
Our nearest neighbor, the moon, is still something of a mystery to us. For decades, scientists have wondered why it appears so lopsided, with dark volcanic plains on the near side (the side we see) and rugged, cratered mountains and a thicker crust on the far side. Now we might be closer to knowing why.
STARGAZERS may soon have a chance to spot the "Great Comet of 2026" – potentially the brightest of the year. Comets are a night sky spectacle, soaring across the darkness in a blaze of glory. Last year saw several visible comets grace our skies, including Comet C/2025 A6 (Lemmon) (seen above and in this article's...
Topics:
observation
space
technology
astronomy
comets
science
Generative AI, LLMs, AI-assisted coding and such remind me of something I heard when I was learning Japanese almost 20 years ago – the “word processor syndrome” or “word processor language disorder”.
When I was learning the Japanese language, I remember being told about a concept called “word processor syndrome” or “word processor language disorder”. The premise was that the rise of electronic devices and word processors would make people less proficient at writing Kanji (the Japanese pictographic script), though they would still be able to recognise and read Kanji well. They would simply rely on devices (word processors, computers, mobile phones, etc.) to do the conversion while entering text, and would only be skilled enough to pick the correct character from a set of options. The anxiety was that this would erode traditional literacy and writing skills.
I see parallels to AI-based code generation here. Many people seem to worry that relying on AI coding tools will erode our “traditional literacy and skills” for writing code. Others are flag-bearers of the new technology and call out the luddites for being stuck in the old ways. I think we will probably still want to keep learning how to program and design systems, but we will start to suffer from an “AI Coding Syndrome”: we will be able to prompt for some code to be generated, read it, and verify that it seems correct, but over time we might lose the ability to write it easily and effectively ourselves. I already see this for some of the simpler tasks, e.g., create a rake task that does this and defaults to these values, write code that exports the data to a database, create a parser that can handle this kind of text, and so on. It will increasingly be more productive to generate such code with a tool, where less thinking is needed.
I do, however, agree with some who are concerned that a lot depends on what the models are trained on, and we will run out of quality code to train models on for newer things. We will have to wait and see how this evolves, but I expect that programming will continue to exist and people will still need to learn programming (and they should) to do a good job – just that the bar may be raised and it’ll be expected that people can achieve more in the working day.
For now, I feel that if I sign off some code or documentation, it is my responsibility to assure people that it is of a certain quality – so the ability to review code and ensure that it is fit for purpose still rests with me. Over time, it’s likely that the LLMs “writing” code will be able to provide that assurance themselves and we will be able to trust the final output more – much as almost no one checks whether a C++ compiler produced the correct assembly.
What do you think? If you have some comments, I’d love to hear from you. Feel free to connect or share the post (you can tag me as @onghu on X or on Mastodon as @[email protected] or @onghu.com on Bluesky to discuss more).
Faced with rising costs and the cap on extra billing, self-employed doctors in France warn that it is becoming impossible to invest in medical innovation.
Researchers have designed a new device that can efficiently create multiple frequency-entangled photons, a feat that cannot be achieved with today's optical devices. The new approach could open a path to more powerful quantum communication and computing technologies.