“The new device is built from arrays of resistive random-access memory (RRAM) cells… The team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.”
Article is based on this paper: https://www.nature.com/articles/s41928-025-01477-0
Okay, I’m starting to think this article doesn’t really know what it’s talking about…
For most of modern computing history, however, analog technology has been written off as an impractical alternative to digital processors. This is because analog systems rely on continuous physical signals to process information — for example, a voltage or electric current. These are much more difficult to control precisely than the two stable states (1 and 0) that digital computers have to work with.
1 and 0 are in fact representations of voltages in digital computers. Typically, on a standard IBM PC, you have 3.3V, 5V and 12V rails (plus negative voltages of those levels); a 0 is represented by roughly zero volts, while a 1 is one of those specified voltages. When you look at the actual voltage waveforms, they aren’t really digital but analogue, with a transient as the voltage changes from 0 to 1 and vice versa. It’s not a solid square step, but a slope that passes a pickup or dropoff point before reaching the nominal voltage level. So a digital computer is basically the same as how they’re describing an analogue computer.
I’m sure there is something different and novel about this study, but the article doesn’t seem to have a clue what that is.
To be clear though, the two defined states are separated by a voltage gap, so it is either on or off regardless of how on or how off. For example, if off is 0V and on is 5V, then 4V is neither of those exactly, but it will still be read as on. So if the voltage is above the critical threshold it is on and therefore represents a 1; otherwise it is a 0.
An analogue computer would be able to use the entire variable voltage range. This means that instead of having a whole bunch of gates working together to represent a number, the voltage could simply be higher or lower. Something that takes 64 bits could be a single voltage. That would mean more processing in the same space and much less actual computation required.
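To put the threshold-vs-full-range idea in code, here’s a minimal Python sketch of my own (the 5V logic level and 2.5V threshold are made-up example numbers, not from the article):

```python
# Toy illustration: digital threshold readout vs. analogue use of a voltage.
# The 5 V logic level and 2.5 V threshold are made-up example values.

THRESHOLD = 2.5  # volts; anything above this reads as "on"

def read_digital(voltage: float) -> int:
    """Quantize a voltage to one bit: 4 V is neither 0 V nor 5 V, but still a 1."""
    return 1 if voltage > THRESHOLD else 0

def read_analog(voltage: float) -> float:
    """An analogue machine uses the whole continuous range as-is."""
    return voltage

for v in (0.2, 4.0, 5.0):
    print(f"{v:.1f} V -> bit {read_digital(v)}, analogue value {read_analog(v):.1f}")
```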
This is an analog pc: https://en.wikipedia.org/wiki/Analog_computer
https://en.wikipedia.org/wiki/Vacuum-tube_computer
It does seem to be talking about this. Analog, from my understanding, doesn’t use 1 or 0 as a representation. It is true that the CPU uses voltage as you stated, but what differentiates it from analog is that in analog the voltage isn’t represented as 0 or 1; it’s used as-is in calculations.
They are not programmed; they are physically made to perform the calculation, from my understanding, like for example the https://en.wikipedia.org/wiki/Antikythera_mechanism
Normal one-and-zero transistors can hold their state for a while, only needing refresh cycles at intervals.
Seems logical to me that it’s harder to hold values of greater variance, which is probably also why everything works with binary systems, and why not a single vendor has chips whose bits have, for instance, 3 or 4 states.
If this weren’t a problem, the most obvious thing would be to make a decimal-based computer. There’s a reason we don’t have that, except by encoding each decimal digit in 4 bits, which wastes 6 of the 16 possible values. Very wasteful.
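That 4-bits-per-digit scheme is binary-coded decimal (BCD); a quick toy sketch of mine showing the waste:

```python
# Binary-coded decimal (BCD): each decimal digit 0-9 gets its own 4-bit nibble.
# A nibble can hold 16 values, so codes 1010-1111 (10-15) are simply never used.

def to_bcd(n: int) -> str:
    """Encode a non-negative integer as BCD, one 4-bit group per decimal digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(1959))  # 0001 1001 0101 1001 -> 16 bits
# Plain binary needs only 11 bits for 1959 (0b11110100111).
```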
Look, it’s one of those articles again. The bi-monthly “China invents earth-shattering technology breakthrough that we never hear about again.”
“1000x faster?” Learn to lie better. Real technological improvements are almost always incremental, like “10-20% faster, bigger, stronger.” Not 1000 freaking times faster. You lie like a child. Or like Trump.
“1000x faster?” Learn to lie better
Analogue computers are indeed capable of doing a task 1000x faster than a regular computer. The difference is they do only that task, in a very specific way, and with one specific type of output. You can 3D-print an “analogue computer” at home that can solve calculus equations; it can technically be faster than a CPU, but that’s the only thing it can do, it’s complex, and the output is a drawing on paper.
If you come up with a repeatable and precise set of mechanical movements that are analogous to the problem you want to solve, you can indeed come up with headlines like that.
Because until it hits market, it’s almost meaningless. These journalists do the same shit with drugs in trials or early research.
I agree that before it’s a company selling a product it’s just dreams.
However this is serious research. Skip the journo and open the nature.com link to the scientific article.
For the ones not familiar with nature, it’s a highly regarded scientific magazine. Articles are written by researchers not journalists.
The Nature paper says they’ve done a proof of concept with a few bits, and concluded that they can reproduce it with cutting edge processors. That’s akin to ‘Mice survive cancer longer’ becoming ‘We’ve cured cancer forever’.
They might be right, but I’m not holding my breath.
AI hype in a nutshell
It can be 1000x faster because it’s analog. Analog systems take very, very little time to compute things. We don’t generally use them because it’s very hard to get the same result twice, and updating them is also hard.
The fun thing is, for LLMs you don’t need perfectly repeatable results. It won’t speed up training, but running the chips could be significantly cheaper with that kind of tech. Veritasium had a video about it a couple of years back, before the AI craze.
link thanks
" Future Computers Will Be Radically Different (Analog Computing)" - https://www.youtube.com/watch?v=GVsUOuSjvcg
Analog is literally computing on the fabric of the universe.
Pdf link for the lazy 💛
> See article preview image
> AI crap CPU
> Leaves immediately

Which is worse - AI slop, or people decrying everything they see as AI slop, even when it isn’t?
Maybe they’re about to solder it on “dead-bug” style? lol

Hmm I see.
I did some further research because I didn’t recognize any CPU that looks like that, and it’s probably an Intel Core 2 Duo processor from before 2009 lol
(x) Doubt.
Same here. I’ll wait to see real-life calculations done by such circuits. They won’t be able to do, e.g., a simple float addition without losing/mangling a bunch of digits.
But maybe the analog precision is sufficient for AI, which is an imprecise matter from the start.
Wouldn’t analog be a lot more precise?
Accurate, though, that’s a different story…
No, it wouldn’t, because you cannot make it reproducible on that scale.
Normal analog hardware, e.g. audio, tops out at about 16 bits of precision. If you go individually tuned, high-end and expensive (studio equipment), you get maybe 24 bits. That is eons away from the 52-bit mantissa precision of a double float.
The maximum theoretical precision of an analog computer is limited by the charge of an electron, about 1.6 × 10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second at most. That gives a maximum theoretical precision of around 10^16 distinguishable levels, or 53 bits, the same as the mantissa of a double-precision (64-bit) float. I believe 80-bit floats are standard in desktop computers.
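Spelling that estimate out, here’s my own back-of-envelope check of the numbers above (the 3 mA figure is just “a few milliamps” made concrete):

```python
import math

# Back-of-envelope for the precision ceiling: count the electrons that flow
# at a few milliamps over one second, then convert that count to bits.
ELECTRON_CHARGE = 1.602e-19  # coulombs
current = 3e-3               # amps ("a few milliamps")
duration = 1.0               # seconds

electrons = current * duration / ELECTRON_CHARGE
print(f"electrons moved: {electrons:.2e}")             # ~1.9e16
print(f"equivalent bits: {math.log2(electrons):.1f}")  # ~54, close to a double's mantissa
```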
In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren’t solving anything that can’t be done faster by digitally simulating an analog computer.
What does this mean, in practice? In what application does that precision show its benefit? Crazy math?
Every operation your computer does. From displaying images on a screen to securely connecting to your bank.
It’s an interesting advancement and it will be neat if something comes of it down the line. The chances of a meaningful product coming out of it in the next decade are close to zero.
They used to use analog computers to solve differential equations, back when every transistor was expensive (relays and tubes even more so) and clock rates were measured in kilohertz. There’s no practical purpose for them now.
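For a sense of what those machines were used for: an analog integrator solved a differential equation continuously, and digitally you just step through it. A toy sketch of mine (not from any paper) integrating dy/dt = -y:

```python
import math

# Digitally stepping through what an analog integrator did physically:
# forward Euler on dy/dt = -y, compared against the exact solution exp(-t).
y, t, dt = 1.0, 0.0, 1e-4
while t < 1.0:
    y += dt * (-y)  # accumulate dy = -y * dt, like charging a capacitor
    t += dt

print(f"Euler: {y:.6f}  exact: {math.exp(-1.0):.6f}")  # both ~0.3679
```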
In number theory and RSA cryptography, you need even more precision. Implementations combine multiple machine integers to get 4096-bit precision.
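General-purpose languages already do that word-chaining for you, so it’s one built-in call. A toy in Python (the modulus here is a stand-in, not a real RSA key):

```python
# Arbitrary-precision integers: CPython chains 30-bit "digits" internally,
# so 4096-bit modular arithmetic (the RSA workhorse) is a single built-in call.
modulus = 2**4096 - 159   # a 4096-bit stand-in, NOT a real RSA modulus
message = 0x42
exponent = 65537          # the common RSA public exponent

ciphertext = pow(message, exponent, modulus)  # modular exponentiation
print(ciphertext.bit_length())                # up to 4096 bits of precision
```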
If you’re asking about the 24-bit ADC, I think that’s usually high-end audio recording.
sounds like bullshit.
read the paper
The paper doesn’t even claim they achieved it. They only say it could potentially reach it
The problem is with the clickbait headline (on livescience.com), not the paper itself.
And it’ll be on sale through Temu and Wish.com
cool
It uses 1% of the energy but is still 1000x faster than our current fastest cards? Yea, I’m calling bullshit. It’s either a one-off, bullshit, or the next industrial revolution.
EDIT: Also, why do articles insist on using ##x less? You can just say it uses 1% of the energy. It’s so much easier to understand.
I would imagine there’s a kernel of truth to it. It’s probably correct, but for one rarely used operation, or something like that. It’s not a total revolution. It’s something that could be included to speed up a very particular task. Like GPUs are much better at matrix math than the CPU, so we often have that in addition to the CPU, which can handle all tasks, but isn’t as fast for those particular ones.
I mean it’s like the 10th time I’m reading about THE breakthrough in Chinese chip production on Lemmy, so let’s just say I’m not holding my breath LoL.
Yeah it’s like reading about North American battery science. Like yeah ok cool, see you in 30 years when you’re maybe production ready
But it only does 16x16 matrix inversion.
Oh noes, how could that -possibly- scale?
To a billion parameter matrix inverter? Probably not too hard, maybe not at those speeds.
To a GPU, or even just the functions used in GenAI? We don’t even know if those are possible with analog computers to begin with.
@TheBlackLounge @kalkulat LLM inference is definitely theoretically possible on analog chips. They just may not scale :v
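For reference on the 16×16 claim upthread: the inversion itself is trivial digitally, and the interesting question is what limited precision costs you. A throwaway numpy check of mine (illustrative only, not from the paper):

```python
import numpy as np

# A 16x16 inversion is trivial digitally; the question is what low precision costs.
# Round the inputs to float16 (~11-bit mantissa, roughly analog-grade), then
# invert in float64 and compare against the full-precision inverse.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16)) + 16 * np.eye(16)  # keep it well-conditioned

exact = np.linalg.inv(A)
rough = np.linalg.inv(A.astype(np.float16).astype(np.float64))

print(np.max(np.abs(exact - rough)))  # error from input quantization alone
```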
https://www.nature.com/articles/s41928-025-01477-0
Here’s the paper published in Nature.
However, it’s worth noting that Nature has had to retract studies before:
https://en.wikipedia.org/wiki/Nature_(journal)#Retractions
From 2000 to 2001, a series of five fraudulent papers by Jan Hendrik Schön was published in Nature. The papers, about semiconductors, were revealed to contain falsified data and other scientific fraud. In 2003, Nature retracted the papers. The Schön scandal was not limited to Nature; other prominent journals, such as Science and Physical Review, also retracted papers by Schön.
Not saying that we shouldn’t trust anything published in scientific journals, but yes, we should wait until more studies that replicate these results exist before jumping to conclusions.
They’re real, but they aren’t general purpose and lack precision. It’s just analog.
coming from china, more like one-off bs, with nothing to back it up on.
It’s a weird damn lie if it is.
And the death of the American economy if it isn’t, fingers crossed.
As someone with a 401k I really hope it isn’t.
The economy crashing won’t hurt billionaires but will kill the middle class.
If anything the economy crashing will allow the 0.1% to buy up anything they haven’t gotten already.
Yeah this is literally what happened in 2008. Economic instability stopped banks from lending to would be individual home buyers, but corpos bought up everything they could eagerly with a 20% price cut.
Economic instability is generally better for the people who can weather the storm, i.e. those with resources to spare, because (as you say) they can buy assets on the cheap when the less fortunate run out of cash to survive on and have to liquidate.
It’s long periods of stability that seem to let the lower classes build up a little. Yet another reason why war and strife is of benefit to the rich.
And now you see why they want to crash the economy.
What middle class? 🤔
The one so worried about their 401Ks they won’t risk the ire of the rich.
For the love of Christ this thumbnail is triggering, lol
Just push ever so slightly more when you hear the crunching sounds.
Then apply thermal paste generously
Pour a bucket of water over it for liquid cooling
Why? It’s a standard socket in SMOBO design (sandwich motherboard).
The CPU is upside down you dork.
Let me guess, you think The Onion is a real newspaper, right?
I mean, in 2025 it’s basically a preview. It’s gone from satire to prophecy.
Ahh yeah, and we should 1. believe this exists, 2. believe that China doesn’t consider technology of this caliber a matter of national security.
1000x!
Is this like medical articles about major cancer discoveries?
yes, except the bullshit cancer discoveries are always in Israel, and the bullshit chip designs are in china.
1000x yes!
This seems like promising technology, but the figures they are providing are almost certainly fiction.
This has all the hallmarks of a team of researchers looking to score an R&D budget.
This was bound to happen. Neural networks are inherently analog processes; simulating them digitally is massively expensive in terms of hardware and power.
The digital domain is good for exact computation; analog is better for the approximate computation that neural networks require.
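A quick way to see the “approximate is fine” claim: perturb a tiny network’s weights with analog-style noise and watch how little the output moves. My own toy sketch, with a made-up 1% noise level:

```python
import numpy as np

# Toy check: a tiny network's output barely moves under small "analog" weight noise.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((1, 8))
x = rng.standard_normal(4)

def forward(w1, w2):
    return w2 @ np.tanh(w1 @ x)  # a 4-8-1 net with a tanh hidden layer

clean = forward(W1, W2)
# ~1% multiplicative noise, standing in for device-to-device analog variation
noisy = forward(W1 * (1 + 0.01 * rng.standard_normal(W1.shape)),
                W2 * (1 + 0.01 * rng.standard_normal(W2.shape)))
print(clean, noisy)  # the two outputs agree to roughly the noise level
```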
You might benefit from watching Hinton’s lecture; much of it details technical reasons why digital is much much better than analog for intelligent systems.
BTW, that is the opposite of what he set out to prove. He says the facts forced him to change his mind.
Thank you for the link, it was very interesting.
Even though analogue neural networks have the drawback that you can’t copy the neuron weights (currently; the tech may evolve to allow it), they can still have use cases in lower-powered edge devices.
I think we’ll probably end up with hybrid designs, using digital for most parts except the calculations.
For low-power neural nets, look up “spiking neural networks”.
> much of it details technical reasons why digital is much much better than analog for intelligent systems
For current LLMs there would be a massive gain in energy efficiency if analogue computing were used. Much of the current energy cost comes from simulating what is effectively analogue processing on digital hardware. There’s a lot lost in the conversion, or “emulation”, of analogue.
I wish researchers like Hinton would stick to discussing the tech. Anytime he says anything about linguistics or human intelligence he sounds like a CS major smugly raising his hand in Phil 101 to a symphony of collective groans.
Hinton is a good computer scientist (with an infinitesimally narrow window of expertise). But the guy is philosophically illiterate.
That, and the way companies have been building AI: they’ve done so little to optimize compute, instead trying to get the research out faster, because that’s what’s expected in this bubble. I’m absolutely expecting future research to find plenty of ways to optimize these major models.
But also, R&D has been entirely focused on digital chips. I would not be at all surprised if there were performance and/or efficiency gains to be had in certain workloads by shifting to analog circuits.
That’s a good point. The model weights could be voltage levels instead of digital representations. Lots of audio tech uses analog for better fidelity. I also read that there’s a startup using particle beams for lithography. Exciting times.
At least one Nobel Laureate has exactly the opposite opinion (see the Hinton lecture above)
what audio tech uses analog for better fidelity?
Vinyl records, analog tube amplifiers, a good pair of speakers 🤌
Honestly though digital compression now is so good it probably sounds the same.
speakers are analog devices by nature.
The other two are used for the distortions they introduce, so quite literally lower fidelity. Whether some people like those distortions is irrelevant.
You want high fidelity: lossless digital audio formats.
Yeah, I get very good sound out of class D amplifiers. They’re cheap, they’re energy efficient, and they usually pack in features for digital formats because it’s easy to do.
I hope Nvidia stock drops 10% so I can buy more.
Actually it’s so high up now, I think losing 10% isn’t enough for it to look like a good buy.
it would need to drop a lot more than 10%