Most computers today, for all of their potential speed, are largely a mistake, based on the demonstrably unscalable von Neumann architecture and controlled with one of the most shortsighted languages of all time, x86 assembly. They are almost unfathomably inefficient. Their processors have close to a billion transistors, most of which sit idle while a tiny fraction of a fraction of them perform some operation. Three-quarters of a processor may be devoted to the quagmire of cache memory and its demands.
All of this brute-force horsepower gets stacked into an ever-higher Tower of Babel in the relentless race to perform more sequential calculations per second. If people only knew what engineering was required to implement branch prediction and twenty-stage-deep pipelines… It’s like seeing behind the walls of a meat-packing plant. You just don’t want to know.
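If you want a peek behind those walls anyway, here is a rough C sketch of my own (the array size, the threshold of 128, the repetition count, and the use of clock() are all arbitrary choices, not anything taken from a real chip): it sums the “large” elements of an array of random bytes twice, once with the values in random order and once sorted. The work is identical both times; only the predictability of one branch changes.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (1 << 20)   /* about a million values; an arbitrary size */
#define REPS 100         /* repeat the pass so the timing is measurable */

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Sum the "large" elements. The branch on data[i] >= 128 is a coin flip
 * on random bytes, and nearly free once the data is sorted. */
static long long sum_big(const int *data, int n)
{
    long long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] >= 128)
            sum += data[i];
    return sum;
}

static double time_passes(const int *data)
{
    volatile long long sink = 0;   /* keep the work from being optimized away */
    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        sink += sum_big(data, N);
    clock_t t1 = clock();
    (void)sink;
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    if (!data) return 1;
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    double unsorted = time_passes(data);
    qsort(data, N, sizeof *data, cmp_int);   /* same values, now predictable */
    double sorted = time_passes(data);

    printf("unsorted: %.3f s\nsorted:   %.3f s\n", unsorted, sorted);
    free(data);
    return 0;
}

On many machines the sorted pass runs several times faster, because the branch predictor finally gets a branch it can guess. To be fair to the compiler crowd, an aggressive optimizer may replace that branch with a conditional move and erase the difference entirely, which is just more engineering piled on top of the same problem.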
If you knew that your computer burns through two or three hundred empty cycles waiting for a piece of data to be fetched from main memory on every cache miss, or that when you see the little spinny thing you are actually waiting for your hard drive to track down dozens of fragments of a file scattered across the disk because it got too full that one time, or that your web browser locked up on you because some novice programmer wrote part of it with blocking network code that is waiting for the last byte to arrive from the web server, and that the web server is resending that byte over and over because a temporarily overloaded router is dropping packets like crazy so your neighbor can download a YouTube clip of a cat hitting a ball into its owner’s crotch, you might throw up in your mouth a little bit.
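The memory part, at least, is easy to see for yourself. Another rough C sketch (again, the sizes, hop count, and use of clock() are my own arbitrary choices): it shuffles an array of indices into one giant cycle and then chases pointers through it, so every load has to wait for the previous one to come back from RAM, then walks the same array sequentially for comparison.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NODES ((size_t)1 << 22)   /* ~4M entries, 32 MB: bigger than typical caches */
#define HOPS  ((size_t)1 << 24)   /* arbitrary number of dependent loads */

/* rand() can have a small range on some platforms (RAND_MAX may be 32767),
 * so glue two calls together before reducing modulo the bound. */
static size_t rnd(size_t bound)
{
    size_t r = ((size_t)rand() << 16) ^ (size_t)rand();
    return r % bound;
}

int main(void)
{
    size_t *next = malloc(NODES * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: shuffle so the whole array forms one giant cycle,
     * which keeps the chase from getting stuck in a small, cache-friendly loop. */
    for (size_t i = 0; i < NODES; i++)
        next[i] = i;
    for (size_t i = NODES - 1; i > 0; i--) {
        size_t j = rnd(i);   /* j < i, strictly */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    /* Dependent, random loads: each hop must wait for the previous one to
     * come back from main memory, so the core mostly idles. */
    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < HOPS; i++)
        p = next[p];
    clock_t t1 = clock();

    /* The same number of loads, walked in order: the hardware prefetcher
     * hides nearly all of the latency. */
    clock_t t2 = clock();
    size_t sum = 0;
    for (size_t i = 0; i < HOPS; i++)
        sum += next[i & (NODES - 1)];
    clock_t t3 = clock();

    printf("random chase: %.3f s  (p=%zu)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, p);
    printf("sequential:   %.3f s  (sum=%zu)\n", (double)(t3 - t2) / CLOCKS_PER_SEC, sum);
    free(next);
    return 0;
}

On a typical desktop the random chase comes out an order of magnitude slower or worse, and no optimization flag will save it: a dependent load cannot arrive any sooner than the memory system delivers it. That gap is those hundreds of wasted cycles made visible.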
Sure, your computer can perform 10 billion floating point operations per second. But most of the time it’s not doing anything at all. Just like you.